Jan 29 16:11:54 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 16:11:55 crc restorecon[4760]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:11:55 crc restorecon[4760]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc 
restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc 
restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 
16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 
crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc 
restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:11:55 crc restorecon[4760]:
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 
crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc 
restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc 
restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:11:55 crc restorecon[4760]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:55 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc 
restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:11:56 crc restorecon[4760]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:11:56 crc restorecon[4760]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 16:11:56 crc kubenswrapper[4895]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:11:56 crc kubenswrapper[4895]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 16:11:56 crc kubenswrapper[4895]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:11:56 crc kubenswrapper[4895]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 16:11:56 crc kubenswrapper[4895]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:11:56 crc kubenswrapper[4895]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.757734 4895 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.762908 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.762939 4895 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.762949 4895 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.762958 4895 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.762967 4895 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.762976 4895 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.762985 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.762994 4895 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763004 4895 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763013 4895 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDC Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763021 4895 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763029 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763037 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763045 4895 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763054 4895 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763077 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763086 4895 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763094 4895 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763102 4895 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763110 4895 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763117 4895 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763126 4895 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763134 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763141 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 
16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763149 4895 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763157 4895 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763165 4895 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763173 4895 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763182 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763189 4895 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763197 4895 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763205 4895 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763212 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763220 4895 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763228 4895 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763235 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763244 4895 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763252 4895 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763259 4895 feature_gate.go:330] 
unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763270 4895 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763279 4895 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763291 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763300 4895 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763309 4895 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763321 4895 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763329 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763337 4895 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763346 4895 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763353 4895 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763361 4895 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763369 4895 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763380 4895 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763390 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763399 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763407 4895 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763416 4895 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763424 4895 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763435 4895 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763444 4895 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763454 4895 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763462 4895 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763471 4895 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763480 4895 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763488 4895 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763496 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763506 4895 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 
16:11:56.763514 4895 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763522 4895 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763534 4895 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763543 4895 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.763552 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.763690 4895 flags.go:64] FLAG: --address="0.0.0.0" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.763707 4895 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764705 4895 flags.go:64] FLAG: --anonymous-auth="true" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764720 4895 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764732 4895 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764741 4895 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764754 4895 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764766 4895 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764776 4895 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764787 4895 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 29 16:11:56 crc 
kubenswrapper[4895]: I0129 16:11:56.764797 4895 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764807 4895 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764817 4895 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764825 4895 flags.go:64] FLAG: --cgroup-root="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764834 4895 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764843 4895 flags.go:64] FLAG: --client-ca-file="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764852 4895 flags.go:64] FLAG: --cloud-config="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764862 4895 flags.go:64] FLAG: --cloud-provider="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764905 4895 flags.go:64] FLAG: --cluster-dns="[]" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764916 4895 flags.go:64] FLAG: --cluster-domain="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764925 4895 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764935 4895 flags.go:64] FLAG: --config-dir="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764944 4895 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764955 4895 flags.go:64] FLAG: --container-log-max-files="5" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764967 4895 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764976 4895 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764985 4895 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 29 
16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.764994 4895 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765004 4895 flags.go:64] FLAG: --contention-profiling="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765013 4895 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765022 4895 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765031 4895 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765040 4895 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765051 4895 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765063 4895 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765072 4895 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765081 4895 flags.go:64] FLAG: --enable-load-reader="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765090 4895 flags.go:64] FLAG: --enable-server="true" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765099 4895 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765111 4895 flags.go:64] FLAG: --event-burst="100" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765121 4895 flags.go:64] FLAG: --event-qps="50" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765130 4895 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765139 4895 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765148 4895 flags.go:64] FLAG: --eviction-hard="" Jan 29 16:11:56 
crc kubenswrapper[4895]: I0129 16:11:56.765159 4895 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765169 4895 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765178 4895 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765187 4895 flags.go:64] FLAG: --eviction-soft="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765196 4895 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765205 4895 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765214 4895 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765223 4895 flags.go:64] FLAG: --experimental-mounter-path="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765231 4895 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765240 4895 flags.go:64] FLAG: --fail-swap-on="true" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765249 4895 flags.go:64] FLAG: --feature-gates="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765260 4895 flags.go:64] FLAG: --file-check-frequency="20s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765269 4895 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765278 4895 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765288 4895 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765297 4895 flags.go:64] FLAG: --healthz-port="10248" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765306 4895 flags.go:64] FLAG: --help="false" Jan 29 16:11:56 
crc kubenswrapper[4895]: I0129 16:11:56.765315 4895 flags.go:64] FLAG: --hostname-override="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765324 4895 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765333 4895 flags.go:64] FLAG: --http-check-frequency="20s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765342 4895 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765351 4895 flags.go:64] FLAG: --image-credential-provider-config="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765361 4895 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765370 4895 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765379 4895 flags.go:64] FLAG: --image-service-endpoint="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765388 4895 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765397 4895 flags.go:64] FLAG: --kube-api-burst="100" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765406 4895 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765416 4895 flags.go:64] FLAG: --kube-api-qps="50" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765425 4895 flags.go:64] FLAG: --kube-reserved="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765434 4895 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765443 4895 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765452 4895 flags.go:64] FLAG: --kubelet-cgroups="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765461 4895 flags.go:64] FLAG: 
--local-storage-capacity-isolation="true" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765470 4895 flags.go:64] FLAG: --lock-file="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765478 4895 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765487 4895 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765497 4895 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765511 4895 flags.go:64] FLAG: --log-json-split-stream="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765519 4895 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765528 4895 flags.go:64] FLAG: --log-text-split-stream="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765537 4895 flags.go:64] FLAG: --logging-format="text" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765547 4895 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765557 4895 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765566 4895 flags.go:64] FLAG: --manifest-url="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765575 4895 flags.go:64] FLAG: --manifest-url-header="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765586 4895 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765595 4895 flags.go:64] FLAG: --max-open-files="1000000" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765606 4895 flags.go:64] FLAG: --max-pods="110" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765615 4895 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765624 4895 flags.go:64] FLAG: 
--maximum-dead-containers-per-container="1" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765633 4895 flags.go:64] FLAG: --memory-manager-policy="None" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765643 4895 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765651 4895 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765661 4895 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765670 4895 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765690 4895 flags.go:64] FLAG: --node-status-max-images="50" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765699 4895 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765743 4895 flags.go:64] FLAG: --oom-score-adj="-999" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765753 4895 flags.go:64] FLAG: --pod-cidr="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765762 4895 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765776 4895 flags.go:64] FLAG: --pod-manifest-path="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765785 4895 flags.go:64] FLAG: --pod-max-pids="-1" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765795 4895 flags.go:64] FLAG: --pods-per-core="0" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765804 4895 flags.go:64] FLAG: --port="10250" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765814 4895 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 29 16:11:56 crc 
kubenswrapper[4895]: I0129 16:11:56.765823 4895 flags.go:64] FLAG: --provider-id="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765833 4895 flags.go:64] FLAG: --qos-reserved="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765842 4895 flags.go:64] FLAG: --read-only-port="10255" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765851 4895 flags.go:64] FLAG: --register-node="true" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765861 4895 flags.go:64] FLAG: --register-schedulable="true" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765898 4895 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765914 4895 flags.go:64] FLAG: --registry-burst="10" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765926 4895 flags.go:64] FLAG: --registry-qps="5" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765937 4895 flags.go:64] FLAG: --reserved-cpus="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765947 4895 flags.go:64] FLAG: --reserved-memory="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765959 4895 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765970 4895 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765980 4895 flags.go:64] FLAG: --rotate-certificates="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765989 4895 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.765999 4895 flags.go:64] FLAG: --runonce="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766008 4895 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766017 4895 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 
16:11:56.766028 4895 flags.go:64] FLAG: --seccomp-default="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766037 4895 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766046 4895 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766056 4895 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766078 4895 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766088 4895 flags.go:64] FLAG: --storage-driver-password="root" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766097 4895 flags.go:64] FLAG: --storage-driver-secure="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766106 4895 flags.go:64] FLAG: --storage-driver-table="stats" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766116 4895 flags.go:64] FLAG: --storage-driver-user="root" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766125 4895 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766135 4895 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766144 4895 flags.go:64] FLAG: --system-cgroups="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766154 4895 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766169 4895 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766178 4895 flags.go:64] FLAG: --tls-cert-file="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766187 4895 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766198 4895 flags.go:64] FLAG: --tls-min-version="" Jan 29 16:11:56 crc 
kubenswrapper[4895]: I0129 16:11:56.766206 4895 flags.go:64] FLAG: --tls-private-key-file="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766216 4895 flags.go:64] FLAG: --topology-manager-policy="none" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766225 4895 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766234 4895 flags.go:64] FLAG: --topology-manager-scope="container" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766244 4895 flags.go:64] FLAG: --v="2" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766256 4895 flags.go:64] FLAG: --version="false" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766267 4895 flags.go:64] FLAG: --vmodule="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766277 4895 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.766286 4895 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766552 4895 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766564 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766573 4895 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766582 4895 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766591 4895 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766599 4895 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766608 4895 feature_gate.go:330] unrecognized feature gate: 
GCPClusterHostedDNS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766616 4895 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766624 4895 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766632 4895 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766644 4895 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766652 4895 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766659 4895 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766667 4895 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766675 4895 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766683 4895 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766691 4895 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766698 4895 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766706 4895 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766714 4895 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766721 4895 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766729 
4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766737 4895 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766745 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766756 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766764 4895 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766772 4895 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766780 4895 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766787 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766795 4895 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766803 4895 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766812 4895 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766821 4895 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766831 4895 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766839 4895 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766847 4895 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766856 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766891 4895 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766902 4895 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766911 4895 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766920 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766929 4895 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766942 4895 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766952 4895 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766960 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766970 4895 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766978 4895 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766988 4895 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.766996 4895 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767004 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767013 4895 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767022 4895 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767030 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767037 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767045 4895 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767053 4895 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767064 
4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767072 4895 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767079 4895 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767087 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767095 4895 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767102 4895 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767110 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767121 4895 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767131 4895 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767140 4895 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767148 4895 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767157 4895 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767165 4895 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767174 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.767182 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.767204 4895 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.785533 4895 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.785589 4895 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785815 4895 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785842 4895 
feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785852 4895 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785864 4895 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785898 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785909 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785919 4895 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785929 4895 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785938 4895 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785947 4895 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785956 4895 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785965 4895 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785973 4895 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785982 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.785992 4895 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786001 4895 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:11:56 crc kubenswrapper[4895]: 
W0129 16:11:56.786009 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786019 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786028 4895 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786038 4895 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786047 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786055 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786064 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786073 4895 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786082 4895 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786094 4895 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786109 4895 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786120 4895 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786130 4895 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786140 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786151 4895 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786162 4895 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786172 4895 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786183 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786193 4895 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786203 4895 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786214 4895 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786226 4895 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786236 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786245 4895 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786257 4895 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786268 4895 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786279 4895 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786288 4895 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786297 4895 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786307 4895 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786316 4895 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786324 4895 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786333 4895 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786343 4895 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786352 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 
16:11:56.786360 4895 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786369 4895 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786377 4895 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786389 4895 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786399 4895 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786407 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786420 4895 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786429 4895 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786437 4895 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786445 4895 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786455 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786464 4895 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786473 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786481 4895 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 
16:11:56.786491 4895 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786500 4895 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786508 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786517 4895 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786525 4895 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786533 4895 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.786547 4895 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.786999 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787016 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787025 4895 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787034 4895 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787043 4895 feature_gate.go:330] unrecognized feature gate: 
VSphereStaticIPs Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787051 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787061 4895 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787070 4895 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787078 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787088 4895 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787096 4895 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787104 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787114 4895 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787123 4895 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787133 4895 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787142 4895 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787151 4895 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787159 4895 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787169 4895 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 
29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787178 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787186 4895 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787194 4895 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787202 4895 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787211 4895 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787219 4895 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787228 4895 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787237 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787246 4895 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787255 4895 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787263 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787272 4895 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787280 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787289 4895 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787297 4895 
feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787306 4895 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787315 4895 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787323 4895 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787331 4895 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787340 4895 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787351 4895 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787362 4895 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787371 4895 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787382 4895 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787392 4895 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787403 4895 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787413 4895 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787422 4895 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787431 4895 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787440 4895 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787448 4895 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787457 4895 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787466 4895 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787477 4895 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787488 4895 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787498 4895 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787508 4895 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787520 4895 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787530 4895 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787539 4895 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787549 4895 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787558 4895 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787568 4895 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787757 4895 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787766 4895 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787775 4895 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787784 4895 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787793 4895 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787802 4895 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787810 4895 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787819 4895 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.787828 4895 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.787841 4895 
feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.788140 4895 server.go:940] "Client rotation is on, will bootstrap in background" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.795489 4895 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.795623 4895 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.798892 4895 server.go:997] "Starting client certificate rotation" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.798926 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.800574 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-06 15:58:22.517458865 +0000 UTC Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.800902 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.830163 4895 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.833457 4895 dynamic_cafile_content.go:161] "Starting controller" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 16:11:56 crc kubenswrapper[4895]: E0129 16:11:56.834560 4895 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.851670 4895 log.go:25] "Validated CRI v1 runtime API" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.892181 4895 log.go:25] "Validated CRI v1 image API" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.894589 4895 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.905226 4895 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-16-07-30-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.905286 4895 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:49 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.931313 4895 manager.go:217] Machine: {Timestamp:2026-01-29 16:11:56.929018053 +0000 UTC m=+0.731995337 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 
CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:fe28fa87-b659-4e7e-881f-540611df3a38 BootID:d92b6098-6fec-422c-9ef8-93b6ed81f7f4 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:49 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:15:a2:4e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:15:a2:4e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fd:31:c2 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:30:cd:cd Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7b:90:e5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:48:68:2a Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:b0:79:4a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:c2:58:1d:2a:d1:05 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:76:2f:d5:ca:af:0d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] 
Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.931566 4895 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.931756 4895 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.933487 4895 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.933663 4895 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.933707 4895 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.933997 4895 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.934008 4895 container_manager_linux.go:303] "Creating device plugin manager"
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.934917 4895 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.934958 4895 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.935859 4895 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.936353 4895 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.941466 4895 kubelet.go:418] "Attempting to sync node with API server"
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.941492 4895 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.941519 4895 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.941537 4895 kubelet.go:324] "Adding apiserver pod source"
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.941551 4895 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:11:56 crc kubenswrapper[4895]: I0129
16:11:56.946723 4895 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.948272 4895 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.949786 4895 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.949825 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.949860 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Jan 29 16:11:56 crc kubenswrapper[4895]: E0129 16:11:56.950012 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:11:56 crc kubenswrapper[4895]: E0129 16:11:56.950106 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:11:56 
crc kubenswrapper[4895]: I0129 16:11:56.951845 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.951887 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.951896 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.951905 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.951918 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.951926 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.951934 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.951945 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.951954 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.951962 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.951973 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.951980 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.952002 4895 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.952905 4895 server.go:1280] "Started kubelet" Jan 29 16:11:56 crc systemd[1]: Started Kubernetes 
Kubelet. Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.955207 4895 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.956258 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.955689 4895 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.958222 4895 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.960925 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.961118 4895 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.961393 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 15:19:43.427236577 +0000 UTC Jan 29 16:11:56 crc kubenswrapper[4895]: E0129 16:11:56.961762 4895 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.962120 4895 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.962094 4895 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.962166 4895 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.963354 4895 factory.go:55] 
Registering systemd factory Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.963380 4895 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.963358 4895 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.963839 4895 factory.go:153] Registering CRI-O factory Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.964076 4895 factory.go:221] Registration of the crio container factory successfully Jan 29 16:11:56 crc kubenswrapper[4895]: W0129 16:11:56.964293 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Jan 29 16:11:56 crc kubenswrapper[4895]: E0129 16:11:56.969291 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.969313 4895 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.969398 4895 factory.go:103] Registering Raw factory Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.969444 4895 manager.go:1196] Started watching for new ooms in manager Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.974277 4895 manager.go:319] Starting recovery of all containers Jan 29 16:11:56 crc kubenswrapper[4895]: E0129 16:11:56.975999 4895 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="200ms" Jan 29 16:11:56 crc kubenswrapper[4895]: E0129 16:11:56.975580 4895 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f3f9bcbd9c5cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:11:56.952421839 +0000 UTC m=+0.755399103,LastTimestamp:2026-01-29 16:11:56.952421839 +0000 UTC m=+0.755399103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.988533 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.989110 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.989256 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.989379 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.989557 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.989695 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.989899 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.990050 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.990219 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.990358 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.990481 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.990653 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.990795 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.990975 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.991145 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" 
seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.991283 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.991440 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.991572 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.991705 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.991838 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.992028 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.992161 4895 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.992295 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.992435 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.992559 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.992709 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.992858 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.993050 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.993197 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.993328 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.993456 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.993683 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.993862 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.994029 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.994163 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.994302 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.994443 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.994564 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.994744 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.994989 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 
16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.995160 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.995309 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.995462 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.995623 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.995753 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.995923 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.996063 4895 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.996265 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.996432 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.996600 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.996748 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.996903 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.997580 4895 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.997759 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.997983 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.998131 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.998258 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.998399 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.998523 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.998659 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.998823 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.998983 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.999125 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.999265 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.999424 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.999575 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.999725 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:11:56 crc kubenswrapper[4895]: I0129 16:11:56.999907 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.000081 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.000227 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.000351 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.000485 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.000619 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.000755 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.000930 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.001087 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.001227 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" 
seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.001349 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.001481 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.001601 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.001727 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.001950 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.002099 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.002239 4895 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.002358 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.002492 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.002611 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.002727 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.002843 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.003001 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.003120 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.003285 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:56.998656 4895 manager.go:324] Recovery completed Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004335 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004444 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004472 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004493 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004516 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004624 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004645 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004665 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004688 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004708 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004761 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004813 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.004982 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005044 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005067 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005091 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005115 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005136 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005165 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005188 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005210 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005278 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005296 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005315 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005385 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005407 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005426 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005446 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" 
seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005467 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005525 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005546 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005564 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005582 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005602 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 16:11:57 crc 
kubenswrapper[4895]: I0129 16:11:57.005622 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005650 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005676 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005730 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005749 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005768 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005831 4895 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005854 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005907 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005935 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.005961 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006031 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006052 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006072 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006145 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006181 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006201 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006223 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006242 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006297 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006317 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006338 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006390 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006457 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006505 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006526 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006607 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006735 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006756 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006777 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006797 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006841 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006898 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006925 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.006946 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007033 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007053 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007072 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007097 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007115 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007134 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007152 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007170 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007202 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007242 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007277 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007303 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007351 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007369 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007394 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007413 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007461 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007482 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007500 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007517 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" 
seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007539 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007556 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007592 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007611 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007632 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007651 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 16:11:57 crc 
kubenswrapper[4895]: I0129 16:11:57.007669 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007686 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007708 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007727 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007754 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007793 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007812 4895 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007831 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007851 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.007899 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.012092 4895 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.012149 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 
16:11:57.012174 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.012194 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.012216 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.012235 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.012254 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.012272 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.012292 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.012350 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.012405 4895 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.012428 4895 reconstruct.go:97] "Volume reconstruction finished" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.012441 4895 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.024818 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.027974 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.028049 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.028070 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.029286 4895 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.029320 4895 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 
16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.029353 4895 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.030515 4895 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.034580 4895 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.034794 4895 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.035049 4895 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 16:11:57 crc kubenswrapper[4895]: E0129 16:11:57.035587 4895 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:11:57 crc kubenswrapper[4895]: W0129 16:11:57.038286 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Jan 29 16:11:57 crc kubenswrapper[4895]: E0129 16:11:57.038391 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.046617 4895 policy_none.go:49] "None policy: Start" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.047976 4895 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.048009 4895 state_mem.go:35] "Initializing new in-memory state store" Jan 29 
16:11:57 crc kubenswrapper[4895]: E0129 16:11:57.062473 4895 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.112916 4895 manager.go:334] "Starting Device Plugin manager" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.112995 4895 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.113014 4895 server.go:79] "Starting device plugin registration server" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.113767 4895 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.113801 4895 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.114155 4895 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.114258 4895 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.114269 4895 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:11:57 crc kubenswrapper[4895]: E0129 16:11:57.124455 4895 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.135941 4895 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.136088 4895 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.137768 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.137826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.137896 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.138112 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.138322 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.138364 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.139391 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.139421 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.139431 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.139603 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.139946 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.139998 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.140017 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.140071 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.140023 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.140493 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.140522 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.140530 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.140627 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.140824 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.140939 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.141321 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.141350 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.141360 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.141633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.141668 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.141682 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.141829 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.142183 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.142252 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.143304 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.143353 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.143365 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.146325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.146360 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.146458 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.147148 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.147180 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.147195 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.147487 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.147576 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.150143 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.150201 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.150214 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:57 crc kubenswrapper[4895]: E0129 16:11:57.177222 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="400ms" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.214460 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.214600 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.214698 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.214760 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.214789 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.214853 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.214924 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.214993 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.215026 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.215106 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.215203 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.215273 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.215307 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.215370 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.215413 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.215502 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.216179 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.216246 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.216268 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.216309 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:11:57 crc kubenswrapper[4895]: E0129 16:11:57.217051 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Jan 29 16:11:57 crc 
kubenswrapper[4895]: I0129 16:11:57.317390 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317556 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317608 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317644 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317684 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317721 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317753 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317794 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317804 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317841 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317806 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317848 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317932 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317950 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317995 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317972 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317800 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317962 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.317980 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.318082 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.318152 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.318115 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.318214 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.318243 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.318265 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.318288 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.318411 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.318425 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.318477 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.318483 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.417922 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.420215 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.420294 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.420314 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.420356 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: E0129 16:11:57.421056 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.465266 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.495142 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.515118 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: W0129 16:11:57.547493 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-506ad8e9590ba1e1f2236b604f93630e659b58087389ecb281680b6693c04ca7 WatchSource:0}: Error finding container 506ad8e9590ba1e1f2236b604f93630e659b58087389ecb281680b6693c04ca7: Status 404 returned error can't find the container with id 506ad8e9590ba1e1f2236b604f93630e659b58087389ecb281680b6693c04ca7
Jan 29 16:11:57 crc kubenswrapper[4895]: W0129 16:11:57.549826 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-39ddec801c568e7771fbc79eb8742e6246016adfa233366cccc0ef0eab9a39cb WatchSource:0}: Error finding container 39ddec801c568e7771fbc79eb8742e6246016adfa233366cccc0ef0eab9a39cb: Status 404 returned error can't find the container with id 39ddec801c568e7771fbc79eb8742e6246016adfa233366cccc0ef0eab9a39cb
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.550174 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.557689 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: E0129 16:11:57.580469 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="800ms"
Jan 29 16:11:57 crc kubenswrapper[4895]: W0129 16:11:57.785486 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 29 16:11:57 crc kubenswrapper[4895]: E0129 16:11:57.785602 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.822208 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.824179 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.824220 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.824230 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.824252 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: E0129 16:11:57.824951 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc"
Jan 29 16:11:57 crc kubenswrapper[4895]: W0129 16:11:57.932633 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 29 16:11:57 crc kubenswrapper[4895]: E0129 16:11:57.932777 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.957118 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 29 16:11:57 crc kubenswrapper[4895]: I0129 16:11:57.962210 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:36:36.955380843 +0000 UTC
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.041097 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"548414df191a15c9b1b2261c05044dca38f2480608d0334b3d161b40a4b1736e"}
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.042917 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"39ddec801c568e7771fbc79eb8742e6246016adfa233366cccc0ef0eab9a39cb"}
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.044202 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"506ad8e9590ba1e1f2236b604f93630e659b58087389ecb281680b6693c04ca7"}
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.045548 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"680c12ec4c97a53fcb52702a9ccf10aab79224849e7dbe0a68b3107564b05b29"}
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.046803 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d536f555e6a6e0268024b68fcbaaa79c9e88d0802dc3dd9a43bc9d7948974767"}
Jan 29 16:11:58 crc kubenswrapper[4895]: W0129 16:11:58.181393 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 29 16:11:58 crc kubenswrapper[4895]: E0129 16:11:58.181947 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:11:58 crc kubenswrapper[4895]: W0129 16:11:58.334663 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 29 16:11:58 crc kubenswrapper[4895]: E0129 16:11:58.334801 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:11:58 crc kubenswrapper[4895]: E0129 16:11:58.383020 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="1.6s"
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.625580 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.628215 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.628285 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.628311 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.628359 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 16:11:58 crc kubenswrapper[4895]: E0129 16:11:58.629359 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc"
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.844287 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 16:11:58 crc kubenswrapper[4895]: E0129 16:11:58.845730 4895 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.957551 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 29 16:11:58 crc kubenswrapper[4895]: I0129 16:11:58.963063 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 13:00:55.881445499 +0000 UTC
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.054652 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639" exitCode=0
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.054721 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639"}
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.054916 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.056577 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.056646 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.056671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.057475 4895 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4" exitCode=0
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.057555 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4"}
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.057730 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.058887 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.059948 4895 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="175c0738c330d681e7ec13e820f6fcc4da8ac35e2d68a709129175de33674b49" exitCode=0
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.060013 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.060060 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"175c0738c330d681e7ec13e820f6fcc4da8ac35e2d68a709129175de33674b49"}
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.060265 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.060342 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.060357 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.060300 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.060408 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.060431 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.060715 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.060749 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.060766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.062559 4895 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2" exitCode=0
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.062902 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.062908 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2"}
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.065412 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.065453 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.065475 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.070819 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a"}
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.070954 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58"}
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.070987 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2"}
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.071015 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e"}
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.070989 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.072353 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.072408 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.072427 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.520894 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.532480 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:11:59 crc kubenswrapper[4895]: W0129 16:11:59.615898 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 29 16:11:59 crc kubenswrapper[4895]: E0129 16:11:59.616109 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:11:59 crc kubenswrapper[4895]: W0129 16:11:59.703179 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 29 16:11:59 crc kubenswrapper[4895]: E0129 16:11:59.703251 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.957115 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 29 16:11:59 crc kubenswrapper[4895]: I0129 16:11:59.963400 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:33:35.445702023 +0000 UTC
Jan 29 16:11:59 crc kubenswrapper[4895]: E0129 16:11:59.985050 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="3.2s"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.079170 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105"}
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.079239 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e"}
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.079255 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b"}
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.079268 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca"}
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.083933 4895 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066" exitCode=0
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.084006 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066"}
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.084188 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:12:00 crc kubenswrapper[4895]: W0129 16:12:00.084903 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Jan 29 16:12:00 crc kubenswrapper[4895]: E0129 16:12:00.085004 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.086293 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.086362 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.086379 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.089702 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"cfdd97891eacceb393b3a6113b10bfbe443b4d81e539897f258148fec4429dc2"}
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.089759 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.090967 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.090986 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.090994 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.094811 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.095401 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.098084 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b587eb8a168e54cd2c1bef70157580a7d20cd8e378311a33996f3867de251ae6"}
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.098137 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c95fd8ff11220a00309543179b8c46c74ab7fd2f75a54ce16541ed217121747c"}
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.098156 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e68014e55c516cafe98fd65b5e162f27e8abb4af9675a2aaf6fecb16377fb35e"}
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.098794 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.098831 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.098851 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.100013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.100042 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.100061 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.230569 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.232329 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.232382 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.232398 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.232441 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 16:12:00 crc kubenswrapper[4895]: E0129 16:12:00.233179 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc"
Jan 29 16:12:00 crc kubenswrapper[4895]: W0129 16:12:00.436275 4895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
38.102.83.110:6443: connect: connection refused Jan 29 16:12:00 crc kubenswrapper[4895]: E0129 16:12:00.436393 4895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:12:00 crc kubenswrapper[4895]: I0129 16:12:00.964011 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:06:58.073526476 +0000 UTC Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.102726 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb"} Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.102816 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.103903 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.103952 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.103990 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.106210 4895 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e" exitCode=0 Jan 29 16:12:01 crc kubenswrapper[4895]: 
I0129 16:12:01.106251 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e"} Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.106305 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.106320 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.106349 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.106427 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.106438 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.106522 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.108109 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.108131 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.108144 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.108168 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.108207 4895 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.108225 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.108233 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.108267 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.108285 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.108811 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.108839 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.108853 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:01 crc kubenswrapper[4895]: I0129 16:12:01.964700 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:49:54.400914364 +0000 UTC Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.113508 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.113513 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8"} Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 
16:12:02.113570 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.113589 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384"} Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.113625 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e"} Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.118847 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.119081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.119240 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.251317 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.251667 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.253265 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.253311 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.253328 
4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.374178 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.957303 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 16:12:02 crc kubenswrapper[4895]: I0129 16:12:02.965246 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:30:19.551700289 +0000 UTC Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.124491 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.124573 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.125254 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1"} Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.125343 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b"} Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.125363 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.125995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 
16:12:03.126078 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.126104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.126952 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.127007 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.127028 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.433969 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.435478 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.435513 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.435524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.435551 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:12:03 crc kubenswrapper[4895]: I0129 16:12:03.965764 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:06:51.109370913 +0000 UTC Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.058702 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.128606 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.128646 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.128699 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.130326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.130373 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.130391 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.130691 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.130774 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.130801 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.383775 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.384202 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.386689 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.386757 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.386780 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:04 crc kubenswrapper[4895]: I0129 16:12:04.965973 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:07:57.757837104 +0000 UTC Jan 29 16:12:05 crc kubenswrapper[4895]: I0129 16:12:05.612244 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:12:05 crc kubenswrapper[4895]: I0129 16:12:05.612766 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:05 crc kubenswrapper[4895]: I0129 16:12:05.615007 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:05 crc kubenswrapper[4895]: I0129 16:12:05.615088 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:05 crc kubenswrapper[4895]: I0129 16:12:05.615107 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:05 crc kubenswrapper[4895]: I0129 16:12:05.937035 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:12:05 crc kubenswrapper[4895]: I0129 16:12:05.937322 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:05 crc kubenswrapper[4895]: I0129 
16:12:05.939344 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:05 crc kubenswrapper[4895]: I0129 16:12:05.939405 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:05 crc kubenswrapper[4895]: I0129 16:12:05.939428 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:05 crc kubenswrapper[4895]: I0129 16:12:05.966847 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 06:17:28.749083926 +0000 UTC Jan 29 16:12:06 crc kubenswrapper[4895]: I0129 16:12:06.086474 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:12:06 crc kubenswrapper[4895]: I0129 16:12:06.086780 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:06 crc kubenswrapper[4895]: I0129 16:12:06.089174 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:06 crc kubenswrapper[4895]: I0129 16:12:06.089265 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:06 crc kubenswrapper[4895]: I0129 16:12:06.089294 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:06 crc kubenswrapper[4895]: I0129 16:12:06.967267 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:48:00.393460658 +0000 UTC Jan 29 16:12:07 crc kubenswrapper[4895]: E0129 16:12:07.124828 4895 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"crc\" not found" Jan 29 16:12:07 crc kubenswrapper[4895]: I0129 16:12:07.443728 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 16:12:07 crc kubenswrapper[4895]: I0129 16:12:07.444072 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:07 crc kubenswrapper[4895]: I0129 16:12:07.446129 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:07 crc kubenswrapper[4895]: I0129 16:12:07.446201 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:07 crc kubenswrapper[4895]: I0129 16:12:07.446228 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:07 crc kubenswrapper[4895]: I0129 16:12:07.967695 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:29:10.143418733 +0000 UTC Jan 29 16:12:08 crc kubenswrapper[4895]: I0129 16:12:08.612666 4895 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 16:12:08 crc kubenswrapper[4895]: I0129 16:12:08.612809 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:12:08 crc kubenswrapper[4895]: I0129 16:12:08.968707 4895 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:00:41.220482814 +0000 UTC Jan 29 16:12:09 crc kubenswrapper[4895]: I0129 16:12:09.969127 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:21:04.223754607 +0000 UTC Jan 29 16:12:10 crc kubenswrapper[4895]: I0129 16:12:10.180012 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 29 16:12:10 crc kubenswrapper[4895]: I0129 16:12:10.180249 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:10 crc kubenswrapper[4895]: I0129 16:12:10.181799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:10 crc kubenswrapper[4895]: I0129 16:12:10.181850 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:10 crc kubenswrapper[4895]: I0129 16:12:10.181891 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:10 crc kubenswrapper[4895]: I0129 16:12:10.958246 4895 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 29 16:12:10 crc kubenswrapper[4895]: I0129 16:12:10.969536 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:23:04.815465446 +0000 UTC Jan 29 16:12:11 crc kubenswrapper[4895]: I0129 16:12:11.251172 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe 
status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 16:12:11 crc kubenswrapper[4895]: I0129 16:12:11.251262 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 16:12:11 crc kubenswrapper[4895]: I0129 16:12:11.260770 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 16:12:11 crc kubenswrapper[4895]: I0129 16:12:11.260912 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 16:12:11 crc kubenswrapper[4895]: I0129 16:12:11.970559 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:02:03.030640036 +0000 UTC Jan 29 16:12:12 crc kubenswrapper[4895]: I0129 16:12:12.262789 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:12:12 crc kubenswrapper[4895]: I0129 16:12:12.263187 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:12 crc 
kubenswrapper[4895]: I0129 16:12:12.265100 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:12 crc kubenswrapper[4895]: I0129 16:12:12.265173 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:12 crc kubenswrapper[4895]: I0129 16:12:12.265199 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:12 crc kubenswrapper[4895]: I0129 16:12:12.380144 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]log ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]etcd ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/priority-and-fairness-filter ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/start-apiextensions-informers ok Jan 29 16:12:12 crc kubenswrapper[4895]: 
[+]poststarthook/start-apiextensions-controllers ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/crd-informer-synced ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/start-system-namespaces-controller ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 29 16:12:12 crc kubenswrapper[4895]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/bootstrap-controller ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/start-kube-aggregator-informers ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/apiservice-registration-controller ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/apiservice-discovery-controller ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]autoregister-completion ok Jan 29 
16:12:12 crc kubenswrapper[4895]: [+]poststarthook/apiservice-openapi-controller ok Jan 29 16:12:12 crc kubenswrapper[4895]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 29 16:12:12 crc kubenswrapper[4895]: livez check failed Jan 29 16:12:12 crc kubenswrapper[4895]: I0129 16:12:12.380243 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:12:12 crc kubenswrapper[4895]: I0129 16:12:12.972080 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:32:35.710971244 +0000 UTC Jan 29 16:12:13 crc kubenswrapper[4895]: I0129 16:12:13.973083 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:15:03.85086484 +0000 UTC Jan 29 16:12:14 crc kubenswrapper[4895]: I0129 16:12:14.974097 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:13:49.913337694 +0000 UTC Jan 29 16:12:15 crc kubenswrapper[4895]: I0129 16:12:15.975294 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:41:29.909525538 +0000 UTC Jan 29 16:12:16 crc kubenswrapper[4895]: E0129 16:12:16.261273 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.264697 4895 trace.go:236] Trace[1278243401]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 
(29-Jan-2026 16:12:05.600) (total time: 10663ms): Jan 29 16:12:16 crc kubenswrapper[4895]: Trace[1278243401]: ---"Objects listed" error: 10663ms (16:12:16.264) Jan 29 16:12:16 crc kubenswrapper[4895]: Trace[1278243401]: [10.663973049s] [10.663973049s] END Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.264745 4895 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 16:12:16 crc kubenswrapper[4895]: E0129 16:12:16.267036 4895 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.269102 4895 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.269157 4895 trace.go:236] Trace[418314452]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:12:04.272) (total time: 11996ms): Jan 29 16:12:16 crc kubenswrapper[4895]: Trace[418314452]: ---"Objects listed" error: 11996ms (16:12:16.268) Jan 29 16:12:16 crc kubenswrapper[4895]: Trace[418314452]: [11.996302672s] [11.996302672s] END Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.269207 4895 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.269678 4895 trace.go:236] Trace[1387072870]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:12:05.264) (total time: 11005ms): Jan 29 16:12:16 crc kubenswrapper[4895]: Trace[1387072870]: ---"Objects listed" error: 11005ms (16:12:16.269) Jan 29 16:12:16 crc kubenswrapper[4895]: Trace[1387072870]: [11.005257676s] [11.005257676s] END Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.269825 4895 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.271345 4895 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.281848 4895 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.332343 4895 csr.go:261] certificate signing request csr-5jnf6 is approved, waiting to be issued Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.360033 4895 csr.go:257] certificate signing request csr-5jnf6 is issued Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.568532 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57738->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.568562 4895 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52478->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.568607 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57738->192.168.126.11:17697: read: connection reset by peer" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.568665 4895 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52478->192.168.126.11:17697: read: connection reset by peer" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.776036 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.780051 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.798192 4895 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 16:12:16 crc kubenswrapper[4895]: W0129 16:12:16.798552 4895 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:12:16 crc kubenswrapper[4895]: W0129 16:12:16.798605 4895 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:12:16 crc kubenswrapper[4895]: W0129 16:12:16.798635 4895 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:12:16 crc kubenswrapper[4895]: W0129 16:12:16.798645 4895 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: 
k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:12:16 crc kubenswrapper[4895]: E0129 16:12:16.798740 4895 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods\": read tcp 38.102.83.110:33592->38.102.83.110:6443: use of closed network connection" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.954486 4895 apiserver.go:52] "Watching apiserver" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.964453 4895 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.965692 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-ovn-kubernetes/ovnkube-node-j8c5m","openshift-machine-config-operator/machine-config-daemon-qh8vw","openshift-multus/multus-7p5vp","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-lqtb8","openshift-image-registry/node-ca-s4vrx","openshift-multus/multus-additional-cni-plugins-8h44k","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.966523 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.966637 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.966687 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.966757 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:16 crc kubenswrapper[4895]: E0129 16:12:16.966755 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:16 crc kubenswrapper[4895]: E0129 16:12:16.966813 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.967094 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.967329 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.967474 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.967598 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:16 crc kubenswrapper[4895]: E0129 16:12:16.967738 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.967859 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7p5vp" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.968314 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.968413 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lqtb8" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.968419 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-s4vrx" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.969422 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.969609 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.970033 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.969857 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.969917 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.969973 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.972493 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.972637 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.972565 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.975641 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: 
I0129 16:12:16.975826 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.975827 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.975853 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.975941 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976008 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976039 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976091 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976101 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976198 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976202 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976203 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:17:22.794005751 +0000 UTC Jan 29 
16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976245 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976300 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976378 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976845 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976845 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.976986 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.977112 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.977205 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.977226 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.977296 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.977297 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.977394 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.977453 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.977537 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 16:12:16 crc kubenswrapper[4895]: I0129 16:12:16.977715 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.010598 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.028704 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.043238 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.056437 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.064058 4895 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.071715 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072011 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072146 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072160 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072227 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072253 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072279 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072304 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072327 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072353 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072378 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072401 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072426 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072450 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072474 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072812 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.072981 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073005 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073089 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073190 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073197 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073310 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073341 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073371 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073402 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073340 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073436 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073428 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073313 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073393 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073469 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073502 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073571 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073601 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073615 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073603 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073628 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073752 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073654 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073888 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073842 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073953 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.073992 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074017 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074047 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074076 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074102 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074125 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074148 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074171 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074198 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074219 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074243 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074265 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 
16:12:17.074290 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074310 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074331 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074336 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074357 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074363 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074385 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074414 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074435 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 
16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074458 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074479 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074482 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074503 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074528 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074515 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074557 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074583 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074607 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074630 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074638 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074641 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074658 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074708 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074728 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074764 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074795 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074834 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074888 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074939 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.074977 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075013 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075015 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075031 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075029 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075046 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075146 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075166 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075181 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075218 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075288 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075343 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075400 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075398 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075439 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075476 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075513 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075534 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075548 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075589 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075628 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075664 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075704 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075738 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075775 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075810 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075845 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.076837 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.076919 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.076959 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.076997 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077043 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077084 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077120 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077158 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077193 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077227 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077266 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077304 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077339 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077374 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077411 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077488 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077528 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077571 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077614 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077651 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077691 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077727 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077761 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077804 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077840 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077906 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.077942 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078015 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078062 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078099 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078144 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078179 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078218 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078256 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078293 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078334 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078371 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078417 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078461 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078499 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078539 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078686 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078739 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078780 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078823 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078861 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078952 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078990 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079032 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079087 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079134 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079173 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079210 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079247 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079283 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079318 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079354 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079390 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079426 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079472 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079508 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079546 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079632 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079714 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079752 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079791 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079829 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079895 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079933 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.079969 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080011 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080048 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080088 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080126 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080164 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080205 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080243 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080290 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080326 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080363 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080400 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080438 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080477 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080515 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080554 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080594 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080644 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080684 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080725 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080765 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080833 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 16:12:17 crc
kubenswrapper[4895]: I0129 16:12:17.080905 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080970 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081046 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081092 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081149 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081201 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081247 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081288 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081340 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081378 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081416 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 
29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081456 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081495 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081533 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081574 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081614 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081657 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081708 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081767 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081806 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081848 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081946 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc 
kubenswrapper[4895]: I0129 16:12:17.082020 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082085 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082214 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-slash\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082570 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-node-log\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082648 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-log-socket\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082701 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovn-node-metrics-cert\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082765 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0e805b13-f27f-4252-a4aa-22689d6dc656-serviceca\") pod \"node-ca-s4vrx\" (UID: \"0e805b13-f27f-4252-a4aa-22689d6dc656\") " pod="openshift-image-registry/node-ca-s4vrx" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082814 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwbnt\" (UniqueName: \"kubernetes.io/projected/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-kube-api-access-bwbnt\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082905 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-run-netns\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082956 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rqjm\" (UniqueName: \"kubernetes.io/projected/9af81de5-cf3e-4437-b9c1-32ef1495f362-kube-api-access-9rqjm\") pod \"machine-config-daemon-qh8vw\" (UID: \"9af81de5-cf3e-4437-b9c1-32ef1495f362\") " pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 
16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082994 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-cni-binary-copy\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083030 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-run-k8s-cni-cncf-io\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083068 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e805b13-f27f-4252-a4aa-22689d6dc656-host\") pod \"node-ca-s4vrx\" (UID: \"0e805b13-f27f-4252-a4aa-22689d6dc656\") " pod="openshift-image-registry/node-ca-s4vrx" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083117 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssnsr\" (UniqueName: \"kubernetes.io/projected/0e805b13-f27f-4252-a4aa-22689d6dc656-kube-api-access-ssnsr\") pod \"node-ca-s4vrx\" (UID: \"0e805b13-f27f-4252-a4aa-22689d6dc656\") " pod="openshift-image-registry/node-ca-s4vrx" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083152 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-multus-daemon-config\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: 
I0129 16:12:17.083187 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-systemd-units\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083230 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-multus-socket-dir-parent\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083266 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-hostroot\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083320 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083371 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-run-ovn-kubernetes\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083415 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-run-netns\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083461 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm6qt\" (UniqueName: \"kubernetes.io/projected/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-kube-api-access-vm6qt\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083502 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-systemd\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083537 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovnkube-script-lib\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083583 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.085684 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.085744 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.085784 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovnkube-config\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.085812 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9af81de5-cf3e-4437-b9c1-32ef1495f362-rootfs\") pod \"machine-config-daemon-qh8vw\" (UID: \"9af81de5-cf3e-4437-b9c1-32ef1495f362\") " pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.085902 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv4d5\" (UniqueName: 
\"kubernetes.io/projected/f95c0cb8-ec4a-4478-abe6-ccfd24db2b97-kube-api-access-jv4d5\") pod \"node-resolver-lqtb8\" (UID: \"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\") " pod="openshift-dns/node-resolver-lqtb8" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.085946 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9af81de5-cf3e-4437-b9c1-32ef1495f362-mcd-auth-proxy-config\") pod \"machine-config-daemon-qh8vw\" (UID: \"9af81de5-cf3e-4437-b9c1-32ef1495f362\") " pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.085976 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-var-lib-cni-multus\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086007 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086039 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-system-cni-dir\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086063 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-openvswitch\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086084 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-cni-netd\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086108 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086131 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086159 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:12:17 crc kubenswrapper[4895]: 
I0129 16:12:17.086182 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-multus-conf-dir\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086203 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-run-multus-certs\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086224 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086248 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086271 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " 
pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086294 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086313 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-os-release\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086498 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086527 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-cni-binary-copy\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086560 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086584 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-env-overrides\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086606 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6f9b\" (UniqueName: \"kubernetes.io/projected/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-kube-api-access-b6f9b\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086630 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9af81de5-cf3e-4437-b9c1-32ef1495f362-proxy-tls\") pod \"machine-config-daemon-qh8vw\" (UID: \"9af81de5-cf3e-4437-b9c1-32ef1495f362\") " pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086711 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086740 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-etc-openvswitch\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086762 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-ovn\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086785 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086806 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f95c0cb8-ec4a-4478-abe6-ccfd24db2b97-hosts-file\") pod \"node-resolver-lqtb8\" (UID: \"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\") " pod="openshift-dns/node-resolver-lqtb8" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086824 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-os-release\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc 
kubenswrapper[4895]: I0129 16:12:17.086844 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-var-lib-kubelet\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087080 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-cnibin\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087113 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-var-lib-cni-bin\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087133 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-etc-kubernetes\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087156 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-cnibin\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087177 
4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-var-lib-openvswitch\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087202 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-multus-cni-dir\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087234 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087483 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-system-cni-dir\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087512 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-kubelet\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 
16:12:17.087529 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-cni-bin\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087794 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087813 4895 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087828 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087840 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087856 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087881 4895 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 
16:12:17.087895 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087911 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087924 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087937 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087948 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087964 4895 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087975 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087987 4895 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087999 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088012 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088024 4895 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088036 4895 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088222 4895 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088241 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088253 4895 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088263 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088277 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088288 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088299 4895 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088313 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088326 4895 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088337 4895 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") 
on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088350 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088361 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.093500 4895 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.093831 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078995 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075530 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075549 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075576 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075583 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075800 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.075852 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.076148 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078240 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078367 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078475 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078652 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078691 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078859 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.078972 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080504 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080525 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.080643 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081172 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081238 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081418 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081783 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.081918 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082306 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082595 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082659 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.082680 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083041 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083159 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083576 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083443 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.097706 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.085089 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.085462 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083599 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.085737 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.085958 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086294 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086338 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.086978 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087021 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087149 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087211 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.087568 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.083207 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088082 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088203 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088379 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.088629 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.089068 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.089151 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.089232 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.089474 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.089505 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.089597 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.089883 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.090365 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.090511 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.090542 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.090731 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.090840 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.590817347 +0000 UTC m=+21.393794611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.099242 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.090950 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.093021 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.099287 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.093068 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.093278 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.094030 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.094103 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.094425 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.094421 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.094962 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.095172 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.095304 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.095492 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.095637 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.095850 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.095909 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.096017 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.096256 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.096605 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.096957 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.099475 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.099608 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.100167 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.100337 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.100473 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.100527 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.100665 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.098748 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.101061 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.101417 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.101094 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.101484 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.600850054 +0000 UTC m=+21.403827318 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.101743 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.101802 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.601776291 +0000 UTC m=+21.404753565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.101858 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.101881 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.103146 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.103420 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.104091 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.107854 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.110412 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.109353 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.109819 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.111396 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.111446 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.109955 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.111738 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.111764 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.111839 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.611817167 +0000 UTC m=+21.414794441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.110394 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.110828 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.110951 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.112127 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:17.612084485 +0000 UTC m=+21.415061759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.112342 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.112508 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.112560 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.112943 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.115441 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.115745 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.116074 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.116284 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.116576 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.117044 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.117095 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.117490 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.117652 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.118076 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.118331 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.118418 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.118909 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.119398 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.119404 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.119477 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.119483 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.120171 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.120394 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.122607 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.122629 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.123050 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.124141 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.124413 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.124448 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.124605 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.125199 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.125427 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.125538 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.125787 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.125932 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.130634 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.130671 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.130576 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.130989 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.131615 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.131705 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.132502 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.132563 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.132591 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.132784 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.132910 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.132982 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.133457 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.133512 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.133547 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.133559 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.133666 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.134374 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.134709 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.135023 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.135437 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.135449 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.135759 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.139289 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.139517 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.139610 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.141411 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.141394 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.144771 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.145778 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.146457 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.146609 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.146754 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.148029 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.154251 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.159275 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.161530 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.165122 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.169135 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.170817 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.174807 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb" exitCode=255 Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.174929 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb"} Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.180784 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.181523 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.181170 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189146 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-systemd-units\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189191 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-multus-socket-dir-parent\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189210 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-hostroot\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189228 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-run-ovn-kubernetes\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189245 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-run-netns\") pod \"multus-7p5vp\" (UID: 
\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189263 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm6qt\" (UniqueName: \"kubernetes.io/projected/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-kube-api-access-vm6qt\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189294 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-systemd\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189317 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-systemd-units\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189424 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-multus-socket-dir-parent\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189466 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-hostroot\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189492 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-run-ovn-kubernetes\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189518 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-run-netns\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189310 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovnkube-script-lib\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.189807 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-systemd\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.190139 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv4d5\" (UniqueName: \"kubernetes.io/projected/f95c0cb8-ec4a-4478-abe6-ccfd24db2b97-kube-api-access-jv4d5\") pod \"node-resolver-lqtb8\" (UID: \"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\") " pod="openshift-dns/node-resolver-lqtb8" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.190261 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.190411 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovnkube-config\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.190348 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.190597 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9af81de5-cf3e-4437-b9c1-32ef1495f362-rootfs\") pod \"machine-config-daemon-qh8vw\" (UID: \"9af81de5-cf3e-4437-b9c1-32ef1495f362\") " pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.190700 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9af81de5-cf3e-4437-b9c1-32ef1495f362-mcd-auth-proxy-config\") pod \"machine-config-daemon-qh8vw\" (UID: \"9af81de5-cf3e-4437-b9c1-32ef1495f362\") " pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.190797 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-var-lib-cni-multus\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.190930 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-system-cni-dir\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191031 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-openvswitch\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191131 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-cni-netd\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191237 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191348 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-multus-conf-dir\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191450 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-run-multus-certs\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.190654 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9af81de5-cf3e-4437-b9c1-32ef1495f362-rootfs\") pod \"machine-config-daemon-qh8vw\" (UID: \"9af81de5-cf3e-4437-b9c1-32ef1495f362\") " pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191534 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-var-lib-cni-multus\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191583 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191596 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9af81de5-cf3e-4437-b9c1-32ef1495f362-mcd-auth-proxy-config\") 
pod \"machine-config-daemon-qh8vw\" (UID: \"9af81de5-cf3e-4437-b9c1-32ef1495f362\") " pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.190480 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovnkube-script-lib\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191621 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-cni-netd\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191621 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-multus-conf-dir\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191621 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-system-cni-dir\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191653 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-openvswitch\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191138 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovnkube-config\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191652 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-run-multus-certs\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.191994 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192095 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192279 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-os-release\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 
16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192401 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-cni-binary-copy\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192500 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-env-overrides\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192589 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192595 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-etc-openvswitch\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192637 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-ovn\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 
16:12:17.192659 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192680 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6f9b\" (UniqueName: \"kubernetes.io/projected/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-kube-api-access-b6f9b\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192698 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9af81de5-cf3e-4437-b9c1-32ef1495f362-proxy-tls\") pod \"machine-config-daemon-qh8vw\" (UID: \"9af81de5-cf3e-4437-b9c1-32ef1495f362\") " pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192718 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f95c0cb8-ec4a-4478-abe6-ccfd24db2b97-hosts-file\") pod \"node-resolver-lqtb8\" (UID: \"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\") " pod="openshift-dns/node-resolver-lqtb8" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192736 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-os-release\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: 
I0129 16:12:17.192753 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-var-lib-kubelet\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192775 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-cnibin\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192793 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-var-lib-openvswitch\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192837 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-multus-cni-dir\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192855 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-cnibin\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192887 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-var-lib-cni-bin\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192905 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-etc-kubernetes\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192927 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-kubelet\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192944 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-cni-bin\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192963 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-system-cni-dir\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.192983 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0e805b13-f27f-4252-a4aa-22689d6dc656-serviceca\") pod \"node-ca-s4vrx\" (UID: 
\"0e805b13-f27f-4252-a4aa-22689d6dc656\") " pod="openshift-image-registry/node-ca-s4vrx" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193003 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwbnt\" (UniqueName: \"kubernetes.io/projected/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-kube-api-access-bwbnt\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-slash\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193040 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-os-release\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193051 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-node-log\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193088 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-ovn\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc 
kubenswrapper[4895]: I0129 16:12:17.193096 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-log-socket\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193112 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193119 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovn-node-metrics-cert\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193147 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e805b13-f27f-4252-a4aa-22689d6dc656-host\") pod \"node-ca-s4vrx\" (UID: \"0e805b13-f27f-4252-a4aa-22689d6dc656\") " pod="openshift-image-registry/node-ca-s4vrx" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193171 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssnsr\" (UniqueName: \"kubernetes.io/projected/0e805b13-f27f-4252-a4aa-22689d6dc656-kube-api-access-ssnsr\") pod \"node-ca-s4vrx\" (UID: \"0e805b13-f27f-4252-a4aa-22689d6dc656\") " pod="openshift-image-registry/node-ca-s4vrx" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193194 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-run-netns\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193216 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rqjm\" (UniqueName: \"kubernetes.io/projected/9af81de5-cf3e-4437-b9c1-32ef1495f362-kube-api-access-9rqjm\") pod \"machine-config-daemon-qh8vw\" (UID: \"9af81de5-cf3e-4437-b9c1-32ef1495f362\") " pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193240 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-cni-binary-copy\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193265 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-run-k8s-cni-cncf-io\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193291 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-multus-daemon-config\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193332 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193428 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193448 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193463 4895 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193477 4895 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193492 4895 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: 
I0129 16:12:17.193505 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193519 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193532 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193546 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193559 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193574 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193608 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e805b13-f27f-4252-a4aa-22689d6dc656-host\") pod \"node-ca-s4vrx\" (UID: \"0e805b13-f27f-4252-a4aa-22689d6dc656\") " pod="openshift-image-registry/node-ca-s4vrx" Jan 29 16:12:17 crc 
kubenswrapper[4895]: I0129 16:12:17.193652 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.194494 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-cni-binary-copy\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.194504 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-etc-openvswitch\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.194653 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.193070 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-node-log\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.194922 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.194957 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-log-socket\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.195080 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-cnibin\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.195126 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-var-lib-openvswitch\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.195085 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-etc-kubernetes\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.195231 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-multus-cni-dir\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc 
kubenswrapper[4895]: I0129 16:12:17.195309 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-system-cni-dir\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.195348 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-run-netns\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.195600 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f95c0cb8-ec4a-4478-abe6-ccfd24db2b97-hosts-file\") pod \"node-resolver-lqtb8\" (UID: \"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\") " pod="openshift-dns/node-resolver-lqtb8" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.195707 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-kubelet\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.195685 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-os-release\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.195747 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-var-lib-kubelet\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.195768 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-cni-bin\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.195936 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-var-lib-cni-bin\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.196435 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-cni-binary-copy\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.196451 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-slash\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.196579 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-multus-daemon-config\") pod 
\"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.196699 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-host-run-k8s-cni-cncf-io\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.196780 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0e805b13-f27f-4252-a4aa-22689d6dc656-serviceca\") pod \"node-ca-s4vrx\" (UID: \"0e805b13-f27f-4252-a4aa-22689d6dc656\") " pod="openshift-image-registry/node-ca-s4vrx" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.197320 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-env-overrides\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.197421 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-cnibin\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.199201 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovn-node-metrics-cert\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.199353 
4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9af81de5-cf3e-4437-b9c1-32ef1495f362-proxy-tls\") pod \"machine-config-daemon-qh8vw\" (UID: \"9af81de5-cf3e-4437-b9c1-32ef1495f362\") " pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200080 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200192 4895 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200217 4895 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200227 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200237 4895 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200247 4895 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath 
\"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200256 4895 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200268 4895 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200277 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200287 4895 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200296 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200305 4895 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200314 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200324 4895 
reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200334 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200345 4895 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200354 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200364 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200373 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200381 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200391 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200400 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200409 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200417 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200425 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200435 4895 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200446 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200454 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 
16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200462 4895 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200470 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200480 4895 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200488 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200498 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200507 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200516 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200524 4895 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200534 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200543 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200554 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200563 4895 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200571 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200579 4895 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200588 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200599 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200609 4895 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200617 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200625 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200633 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200643 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200651 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc 
kubenswrapper[4895]: I0129 16:12:17.200659 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200668 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200677 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200686 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200695 4895 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200707 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200716 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200725 4895 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200736 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200745 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200756 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200767 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200776 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200785 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200795 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200804 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200827 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200838 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200846 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200858 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200883 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200892 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node 
\"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200902 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200914 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200923 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200931 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200941 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200952 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200964 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: 
I0129 16:12:17.200975 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200987 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.200999 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201011 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201023 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201034 4895 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201044 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201091 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201105 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201117 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201128 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201139 4895 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201150 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201161 4895 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201173 4895 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201184 4895 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201195 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201206 4895 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201216 4895 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201228 4895 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201238 4895 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201249 4895 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201262 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201279 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201292 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201305 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201319 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201333 4895 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201346 4895 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node 
\"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201356 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201367 4895 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201380 4895 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201391 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201401 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201412 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201423 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201432 4895 reconciler_common.go:293] "Volume 
detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201442 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201453 4895 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201462 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201474 4895 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201484 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201496 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201506 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201516 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201527 4895 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201539 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201551 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201564 4895 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201575 4895 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201586 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201596 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201606 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201615 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201625 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201635 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201645 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201655 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 
16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201665 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201676 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201686 4895 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201696 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201706 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201716 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201726 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201746 4895 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201758 4895 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201769 4895 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201781 4895 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201792 4895 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201803 4895 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.201813 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.203826 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.209175 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv4d5\" (UniqueName: \"kubernetes.io/projected/f95c0cb8-ec4a-4478-abe6-ccfd24db2b97-kube-api-access-jv4d5\") pod \"node-resolver-lqtb8\" (UID: \"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\") " pod="openshift-dns/node-resolver-lqtb8" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.211020 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6f9b\" (UniqueName: 
\"kubernetes.io/projected/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-kube-api-access-b6f9b\") pod \"ovnkube-node-j8c5m\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.213141 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.214499 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rqjm\" (UniqueName: \"kubernetes.io/projected/9af81de5-cf3e-4437-b9c1-32ef1495f362-kube-api-access-9rqjm\") pod \"machine-config-daemon-qh8vw\" (UID: \"9af81de5-cf3e-4437-b9c1-32ef1495f362\") " pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.214821 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm6qt\" (UniqueName: \"kubernetes.io/projected/dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5-kube-api-access-vm6qt\") pod \"multus-7p5vp\" (UID: \"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\") " pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.215301 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssnsr\" (UniqueName: \"kubernetes.io/projected/0e805b13-f27f-4252-a4aa-22689d6dc656-kube-api-access-ssnsr\") pod \"node-ca-s4vrx\" (UID: \"0e805b13-f27f-4252-a4aa-22689d6dc656\") " pod="openshift-image-registry/node-ca-s4vrx" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.216426 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwbnt\" (UniqueName: \"kubernetes.io/projected/7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b-kube-api-access-bwbnt\") pod \"multus-additional-cni-plugins-8h44k\" (UID: \"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\") " pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.224198 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.245901 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.277687 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.286862 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.295954 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.307113 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.307330 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.320602 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.328245 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.329319 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: W0129 16:12:17.334059 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-44181fa1df40add7a662de5a61bd20084ee0fcd8df90f4c1c0b5e1ec0be978c7 WatchSource:0}: Error finding container 44181fa1df40add7a662de5a61bd20084ee0fcd8df90f4c1c0b5e1ec0be978c7: Status 404 returned error can't find the container with id 44181fa1df40add7a662de5a61bd20084ee0fcd8df90f4c1c0b5e1ec0be978c7 Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.337151 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7p5vp" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.353251 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8h44k" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.354348 4895 scope.go:117] "RemoveContainer" containerID="608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.354978 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.360048 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lqtb8" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.362964 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 16:07:16 +0000 UTC, rotation deadline is 2026-11-19 22:14:18.617209595 +0000 UTC Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.363007 4895 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7062h2m1.254204432s for next certificate rotation Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.364243 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.369686 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-s4vrx" Jan 29 16:12:17 crc kubenswrapper[4895]: W0129 16:12:17.372428 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb00f5c7f_4264_4580_9c5a_ace62ee4b87d.slice/crio-3b7e375e6e89086852c6f5d9e2950640eaf608eeaa609351c26b9859442b8154 WatchSource:0}: Error finding container 3b7e375e6e89086852c6f5d9e2950640eaf608eeaa609351c26b9859442b8154: Status 404 returned error can't find the container with id 3b7e375e6e89086852c6f5d9e2950640eaf608eeaa609351c26b9859442b8154 Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.381968 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.384165 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.395936 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.407037 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: W0129 16:12:17.418448 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf95c0cb8_ec4a_4478_abe6_ccfd24db2b97.slice/crio-482db621acd5d9689dbbbf5522c216df0149e9807439c8eebe4100de0d6f3cb7 WatchSource:0}: Error finding container 482db621acd5d9689dbbbf5522c216df0149e9807439c8eebe4100de0d6f3cb7: Status 404 returned error can't find the container with id 
482db621acd5d9689dbbbf5522c216df0149e9807439c8eebe4100de0d6f3cb7 Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.426796 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.438093 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.453541 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: W0129 16:12:17.456849 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e805b13_f27f_4252_a4aa_22689d6dc656.slice/crio-2d215d8cf51e83684a81d2aab979327ccf7254a7f40f2a9c53a84dc694e3dbbc WatchSource:0}: Error finding container 2d215d8cf51e83684a81d2aab979327ccf7254a7f40f2a9c53a84dc694e3dbbc: Status 404 returned error can't find the container with id 2d215d8cf51e83684a81d2aab979327ccf7254a7f40f2a9c53a84dc694e3dbbc Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.468252 4895 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.481721 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.509785 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.526144 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.538507 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.553272 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.567791 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.582006 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.599148 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.606434 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.606580 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.606620 4895 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.606588681 +0000 UTC m=+22.409565935 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.606696 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.606727 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.606743 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.606726205 +0000 UTC m=+22.409703469 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.606901 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.606938 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.606931041 +0000 UTC m=+22.409908305 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.614975 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.629434 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.645444 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.659162 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.679408 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.692744 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.708200 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.708255 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.708441 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.708474 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.708487 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.708540 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.708524847 +0000 UTC m=+22.511502111 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.708892 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.708910 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.708919 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:17 crc kubenswrapper[4895]: E0129 16:12:17.708943 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:18.708933958 +0000 UTC m=+22.511911222 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.727376 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.747277 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:17 crc kubenswrapper[4895]: I0129 16:12:17.977253 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:11:07.409774535 +0000 UTC Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.181860 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerID="7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca" exitCode=0 Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.181935 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.182001 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerStarted","Data":"3b7e375e6e89086852c6f5d9e2950640eaf608eeaa609351c26b9859442b8154"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.184359 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lqtb8" 
event={"ID":"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97","Type":"ContainerStarted","Data":"dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.184419 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lqtb8" event={"ID":"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97","Type":"ContainerStarted","Data":"482db621acd5d9689dbbbf5522c216df0149e9807439c8eebe4100de0d6f3cb7"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.186387 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-s4vrx" event={"ID":"0e805b13-f27f-4252-a4aa-22689d6dc656","Type":"ContainerStarted","Data":"5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.186473 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-s4vrx" event={"ID":"0e805b13-f27f-4252-a4aa-22689d6dc656","Type":"ContainerStarted","Data":"2d215d8cf51e83684a81d2aab979327ccf7254a7f40f2a9c53a84dc694e3dbbc"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.189490 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" event={"ID":"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b","Type":"ContainerStarted","Data":"d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.189995 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" event={"ID":"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b","Type":"ContainerStarted","Data":"9aea7c6b5b57a2c6313a6590bde75c139a008ab32c7d686799b1c26b8552ef28"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.191791 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.191830 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.191848 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8b8a22e947a87c4a8b0460f961f965350430b8fbef7beda054883361dc92dae3"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.198895 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.198979 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"44181fa1df40add7a662de5a61bd20084ee0fcd8df90f4c1c0b5e1ec0be978c7"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.201324 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.203399 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.206315 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.206666 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.208465 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.208498 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"2e20ae982c3c08edbe62f04934e293bad08e3d4632e97a190ee81a409341ad6b"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.208511 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"f23110a78391e3ab822bd98d39e6382e03e077e12fe651dcd082dfe4fac1a140"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.210734 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7p5vp" event={"ID":"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5","Type":"ContainerStarted","Data":"e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.210805 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7p5vp" event={"ID":"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5","Type":"ContainerStarted","Data":"906e2a80767567cd6b65b1242a2f70206c46eb3670e6aa0b3efbac5e07bfea65"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.213721 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1893a3c98a69a4e7f78a69da5835afc6abfbd3d2aad614394041a3d9b346944c"} Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.215212 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.215713 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.230233 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.240595 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.252015 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.262904 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.274167 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.290757 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.310061 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.329179 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.355212 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.374594 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.399632 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.422196 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.448073 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.464775 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.480761 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.499960 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.516380 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.530779 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.550995 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.589025 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.619324 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.619480 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.619529 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.619670 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.619760 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.619736442 +0000 UTC m=+24.422713706 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.620468 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.620424893 +0000 UTC m=+24.423402157 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.620471 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.620585 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.620573667 +0000 UTC m=+24.423551141 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.626367 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d4632e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 
16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.670214 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.707372 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.721087 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.721141 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.721307 
4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.721329 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.721366 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.721390 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.721437 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.721440 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.721422922 +0000 UTC m=+24.524400186 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.721453 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:18 crc kubenswrapper[4895]: E0129 16:12:18.721530 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:20.721507294 +0000 UTC m=+24.524484558 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.744422 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.782958 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.832093 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:18 crc kubenswrapper[4895]: I0129 16:12:18.977576 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:45:35.831136106 +0000 UTC Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.036392 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:19 crc kubenswrapper[4895]: E0129 16:12:19.036609 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.037261 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:19 crc kubenswrapper[4895]: E0129 16:12:19.037421 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.037660 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:19 crc kubenswrapper[4895]: E0129 16:12:19.037766 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.040397 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.041258 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.042663 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.043386 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.044462 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.045046 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.045701 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.046969 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.047888 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.048908 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.049483 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.051070 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.051690 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.053228 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.056715 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.058146 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.060947 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.063404 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.065116 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.067141 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.067953 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.069613 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.070344 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.072033 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.072856 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.073779 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.075453 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.076213 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.077747 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.078638 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.081190 4895 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.081339 4895 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.083921 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.085342 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.086353 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.088985 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.090141 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.091562 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.092550 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.094138 4895 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.094887 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.096365 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.097353 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.099119 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.099791 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.101132 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.102105 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.103960 4895 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.104692 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.106006 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.106689 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.108245 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.109083 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.109828 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.220478 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" 
event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerStarted","Data":"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3"} Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.220527 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerStarted","Data":"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1"} Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.220539 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerStarted","Data":"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617"} Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.220547 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerStarted","Data":"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433"} Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.220557 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerStarted","Data":"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c"} Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.222361 4895 generic.go:334] "Generic (PLEG): container finished" podID="7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b" containerID="d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee" exitCode=0 Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.223529 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" 
event={"ID":"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b","Type":"ContainerDied","Data":"d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee"} Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.266752 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.292050 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.309189 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.326779 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.345420 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.362054 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.393800 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.413781 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.428158 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.445463 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.462148 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.475899 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.489723 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.514813 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:19Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:19 crc kubenswrapper[4895]: I0129 16:12:19.978314 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:09:29.329178204 +0000 UTC Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.222635 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.232921 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b" containerID="878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4" exitCode=0 Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.233038 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" event={"ID":"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b","Type":"ContainerDied","Data":"878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4"} Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.240087 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerStarted","Data":"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74"} Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.246278 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.246775 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.250267 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.273108 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.298568 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.323836 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.344381 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.356691 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.369836 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.384670 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.400447 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.412761 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.431490 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.446157 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.458398 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.471165 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.485464 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.499139 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.509801 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.536230 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.554026 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.579639 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.601612 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.625077 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.636396 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.640640 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.640836 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.640918 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.640855749 +0000 UTC m=+28.443833063 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.640967 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.641018 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.641124 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.641099266 +0000 UTC m=+28.444076720 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.641194 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.641269 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.6412502 +0000 UTC m=+28.444227504 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.652504 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a
0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.667730 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.685788 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.701925 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.719484 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.741712 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.741819 4895 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:20Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.742074 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.742406 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 
16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.742436 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.742311 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.742504 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.742527 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.742498236 +0000 UTC m=+28.545475530 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.742541 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.742567 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:20 crc kubenswrapper[4895]: E0129 16:12:20.742664 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:24.74263622 +0000 UTC m=+28.545613524 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:20 crc kubenswrapper[4895]: I0129 16:12:20.979567 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:32:32.929701468 +0000 UTC Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.036339 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.036440 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:21 crc kubenswrapper[4895]: E0129 16:12:21.036636 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.036693 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:21 crc kubenswrapper[4895]: E0129 16:12:21.036804 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:21 crc kubenswrapper[4895]: E0129 16:12:21.036941 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.247475 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6"} Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.250401 4895 generic.go:334] "Generic (PLEG): container finished" podID="7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b" containerID="7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c" exitCode=0 Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.250489 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" event={"ID":"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b","Type":"ContainerDied","Data":"7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c"} Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 
16:12:21.270169 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.287382 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.308529 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.330325 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.345382 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.358862 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.378493 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8
f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.393254 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.413666 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.429792 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.444157 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.461066 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.478186 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.495688 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.511211 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.529215 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.545306 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.561972 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 
16:12:21.584333 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 
16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.608183 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.625790 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.640145 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.666438 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8
f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.684367 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.697138 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.716466 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.737027 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.754102 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.771270 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.791240 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:21Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:21 crc kubenswrapper[4895]: I0129 16:12:21.980119 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:13:08.952165778 +0000 UTC Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.258740 4895 generic.go:334] "Generic (PLEG): container finished" podID="7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b" containerID="1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198" exitCode=0 Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.258831 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" event={"ID":"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b","Type":"ContainerDied","Data":"1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198"} Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.266536 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerStarted","Data":"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace"} Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.285108 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.300666 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.320428 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.350555 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.373614 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.389534 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.406968 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.423021 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.438826 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.461177 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.486257 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.499671 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.516801 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 
16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.530835 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.549145 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.667773 4895 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.670622 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.670683 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.670698 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.670895 4895 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.684793 4895 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.685280 4895 kubelet_node_status.go:79] 
"Successfully registered node" node="crc" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.686904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.686979 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.687000 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.687029 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.687055 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:22Z","lastTransitionTime":"2026-01-29T16:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:22 crc kubenswrapper[4895]: E0129 16:12:22.709325 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.720355 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.720423 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.720445 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.720475 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.720498 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:22Z","lastTransitionTime":"2026-01-29T16:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.766642 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.766713 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.766732 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.766785 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.766805 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:22Z","lastTransitionTime":"2026-01-29T16:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:22 crc kubenswrapper[4895]: E0129 16:12:22.786099 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.791699 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.791781 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.791808 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.791849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.791914 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:22Z","lastTransitionTime":"2026-01-29T16:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:22 crc kubenswrapper[4895]: E0129 16:12:22.811910 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:22 crc kubenswrapper[4895]: E0129 16:12:22.812198 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.814499 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.814535 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.814550 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.814575 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.814593 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:22Z","lastTransitionTime":"2026-01-29T16:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.917507 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.917568 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.917582 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.917601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.917616 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:22Z","lastTransitionTime":"2026-01-29T16:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:22 crc kubenswrapper[4895]: I0129 16:12:22.980712 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:19:58.38701035 +0000 UTC Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.029411 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.029463 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.029476 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.029504 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.029519 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:23Z","lastTransitionTime":"2026-01-29T16:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.036315 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.036402 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.036330 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:23 crc kubenswrapper[4895]: E0129 16:12:23.036524 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:23 crc kubenswrapper[4895]: E0129 16:12:23.036646 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:23 crc kubenswrapper[4895]: E0129 16:12:23.036770 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.133126 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.133189 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.133200 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.133219 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.133231 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:23Z","lastTransitionTime":"2026-01-29T16:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.236939 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.237013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.237034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.237069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.237097 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:23Z","lastTransitionTime":"2026-01-29T16:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.275154 4895 generic.go:334] "Generic (PLEG): container finished" podID="7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b" containerID="50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209" exitCode=0 Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.275211 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" event={"ID":"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b","Type":"ContainerDied","Data":"50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209"} Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.298994 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.314310 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.338213 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.340436 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.340485 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.340503 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.340543 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.340563 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:23Z","lastTransitionTime":"2026-01-29T16:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.357328 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.371628 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.387462 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.412306 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8
f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.418454 4895 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.428805 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.448813 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.448904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.448923 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.448946 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.448961 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:23Z","lastTransitionTime":"2026-01-29T16:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.457570 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.474122 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.484930 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.497638 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.509601 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.526521 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.540948 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.552318 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.552363 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.552374 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.552393 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.552406 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:23Z","lastTransitionTime":"2026-01-29T16:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.654583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.654654 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.654681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.654716 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.654738 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:23Z","lastTransitionTime":"2026-01-29T16:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.757227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.757279 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.757293 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.757314 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.757328 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:23Z","lastTransitionTime":"2026-01-29T16:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.788048 4895 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.861979 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.862033 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.862050 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.862071 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.862085 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:23Z","lastTransitionTime":"2026-01-29T16:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.965277 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.965343 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.965356 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.965379 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.965394 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:23Z","lastTransitionTime":"2026-01-29T16:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:23 crc kubenswrapper[4895]: I0129 16:12:23.981808 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:42:24.026173149 +0000 UTC Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.069602 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.069681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.069708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.069741 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.069766 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:24Z","lastTransitionTime":"2026-01-29T16:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.173340 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.173415 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.173433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.173858 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.173937 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:24Z","lastTransitionTime":"2026-01-29T16:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.279558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.280182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.280208 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.280250 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.280277 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:24Z","lastTransitionTime":"2026-01-29T16:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.289746 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerStarted","Data":"e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a"} Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.290138 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.290170 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.297756 4895 generic.go:334] "Generic (PLEG): container finished" podID="7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b" containerID="d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a" exitCode=0 Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.297817 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" event={"ID":"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b","Type":"ContainerDied","Data":"d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a"} Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.311142 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.336796 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.354408 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.365846 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.376844 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.379051 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.385133 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 
16:12:24.385165 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.385176 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.385193 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.385205 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:24Z","lastTransitionTime":"2026-01-29T16:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.389405 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.403118 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.421489 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.438229 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.457096 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.476767 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.489787 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc
b7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.491893 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.491943 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.491955 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:24 crc 
kubenswrapper[4895]: I0129 16:12:24.491983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.492003 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:24Z","lastTransitionTime":"2026-01-29T16:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.512545 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.537681 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.555765 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.570708 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.592998 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.595660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.595731 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.595750 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.595780 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.595798 4895 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:24Z","lastTransitionTime":"2026-01-29T16:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.617969 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.641595 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 
16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.667195 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.683959 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.686283 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.686462 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.686512 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.686676 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.686694 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.686731 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:12:32.686665343 +0000 UTC m=+36.489642677 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.686803 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:32.686778967 +0000 UTC m=+36.489756231 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.686828 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:32.686816928 +0000 UTC m=+36.489794192 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.699030 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.699095 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.699114 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.699142 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.699161 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:24Z","lastTransitionTime":"2026-01-29T16:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.720028 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.740123 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.760980 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.787330 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.787402 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.787610 4895 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.787650 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.787670 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.787610 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.787754 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:32.787725894 +0000 UTC m=+36.590703168 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.787766 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.787785 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:24 crc kubenswrapper[4895]: E0129 16:12:24.787839 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:32.787819417 +0000 UTC m=+36.590796691 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.792776 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.802461 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.802520 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.802538 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.802569 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.802593 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:24Z","lastTransitionTime":"2026-01-29T16:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.812542 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.826446 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.842138 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.859420 4895 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.874190 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.885690 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.906673 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.906950 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.906966 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:24 crc 
kubenswrapper[4895]: I0129 16:12:24.906991 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.907009 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:24Z","lastTransitionTime":"2026-01-29T16:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:24 crc kubenswrapper[4895]: I0129 16:12:24.982403 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:28:54.91622824 +0000 UTC Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.010290 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.010342 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.010357 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.010381 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.010397 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:25Z","lastTransitionTime":"2026-01-29T16:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.036713 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.036810 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:25 crc kubenswrapper[4895]: E0129 16:12:25.036842 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.036899 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:25 crc kubenswrapper[4895]: E0129 16:12:25.036974 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:25 crc kubenswrapper[4895]: E0129 16:12:25.037060 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.113246 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.113305 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.113325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.113350 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.113366 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:25Z","lastTransitionTime":"2026-01-29T16:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.216573 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.216640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.216660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.216692 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.216710 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:25Z","lastTransitionTime":"2026-01-29T16:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.305079 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.305040 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" event={"ID":"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b","Type":"ContainerStarted","Data":"8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013"} Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.319986 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.320034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.320047 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.320067 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.320080 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:25Z","lastTransitionTime":"2026-01-29T16:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.331207 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.356158 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.374338 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.391527 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.410504 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.422840 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.422908 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.422919 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.422938 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.422948 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:25Z","lastTransitionTime":"2026-01-29T16:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.427139 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.441355 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.458420 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.474196 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.491070 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.508361 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.523485 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.525682 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.525747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.525760 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.525782 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.525796 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:25Z","lastTransitionTime":"2026-01-29T16:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.538914 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.569941 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.588938 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.628337 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.628379 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.628388 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.628401 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.628411 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:25Z","lastTransitionTime":"2026-01-29T16:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.683092 4895 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.731583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.731640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.731656 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.731677 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.731691 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:25Z","lastTransitionTime":"2026-01-29T16:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.836519 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.836578 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.836591 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.836620 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.836637 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:25Z","lastTransitionTime":"2026-01-29T16:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.940542 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.940600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.940617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.940643 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.940661 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:25Z","lastTransitionTime":"2026-01-29T16:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:25 crc kubenswrapper[4895]: I0129 16:12:25.983166 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:20:57.524913489 +0000 UTC Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.003018 4895 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.043669 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.043767 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.043788 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.043815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.043835 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:26Z","lastTransitionTime":"2026-01-29T16:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.146987 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.147037 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.147047 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.147066 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.147077 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:26Z","lastTransitionTime":"2026-01-29T16:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.250491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.250542 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.250553 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.250572 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.250585 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:26Z","lastTransitionTime":"2026-01-29T16:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.308215 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.354061 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.354123 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.354134 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.354157 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.354171 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:26Z","lastTransitionTime":"2026-01-29T16:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.456520 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.456576 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.456590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.456609 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.456622 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:26Z","lastTransitionTime":"2026-01-29T16:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.559171 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.559204 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.559216 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.559234 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.559246 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:26Z","lastTransitionTime":"2026-01-29T16:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.661903 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.661942 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.661950 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.661967 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.661977 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:26Z","lastTransitionTime":"2026-01-29T16:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.764640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.764708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.764722 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.764745 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.764757 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:26Z","lastTransitionTime":"2026-01-29T16:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.867846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.867911 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.867925 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.867944 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.867956 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:26Z","lastTransitionTime":"2026-01-29T16:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.970917 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.970988 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.971007 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.971037 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.971060 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:26Z","lastTransitionTime":"2026-01-29T16:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:26 crc kubenswrapper[4895]: I0129 16:12:26.983475 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:30:33.238201146 +0000 UTC Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.035932 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.036017 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:27 crc kubenswrapper[4895]: E0129 16:12:27.036075 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:27 crc kubenswrapper[4895]: E0129 16:12:27.036140 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.036233 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:27 crc kubenswrapper[4895]: E0129 16:12:27.036424 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.053951 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.068239 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.074439 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.074476 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.074486 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:27 crc 
kubenswrapper[4895]: I0129 16:12:27.074506 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.074518 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:27Z","lastTransitionTime":"2026-01-29T16:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.090526 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8
3762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.108030 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.125917 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.148163 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.168525 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8
f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.178104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.178179 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.178191 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.178239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.178256 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:27Z","lastTransitionTime":"2026-01-29T16:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.187549 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.203312 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.227604 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.245469 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.256767 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.270358 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.281175 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.281237 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.281251 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.281271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.281283 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:27Z","lastTransitionTime":"2026-01-29T16:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.284665 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.296260 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.313664 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/0.log" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.318971 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerID="e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a" exitCode=1 Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.319065 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a"} Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.320659 4895 scope.go:117] "RemoveContainer" containerID="e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.338233 4895 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.356131 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.374737 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.385260 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.385691 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.385830 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.385995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.386142 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:27Z","lastTransitionTime":"2026-01-29T16:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.393523 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.405958 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.418247 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.446566 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8
f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.468126 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.485497 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.488593 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 
16:12:27.488629 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.488645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.488673 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.488690 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:27Z","lastTransitionTime":"2026-01-29T16:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.513261 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16
:12:26Z\\\",\\\"message\\\":\\\" 6193 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:12:26.715800 6193 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:12:26.715819 6193 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:12:26.715837 6193 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:12:26.715869 6193 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:12:26.715880 6193 factory.go:656] Stopping watch factory\\\\nI0129 16:12:26.715883 6193 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:12:26.715867 6193 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:12:26.715914 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:12:26.715970 6193 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 16:12:26.715994 6193 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:12:26.716141 6193 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.531345 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.545803 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.558483 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.573451 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.589659 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:27Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.591834 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.591882 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.591896 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.591933 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.591946 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:27Z","lastTransitionTime":"2026-01-29T16:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.694509 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.694580 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.694599 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.694640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.694657 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:27Z","lastTransitionTime":"2026-01-29T16:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.798465 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.798526 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.798544 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.798565 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.798580 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:27Z","lastTransitionTime":"2026-01-29T16:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.901537 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.901621 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.901637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.901658 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.901673 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:27Z","lastTransitionTime":"2026-01-29T16:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:27 crc kubenswrapper[4895]: I0129 16:12:27.984211 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:54:09.48278593 +0000 UTC Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.004485 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.004532 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.004548 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.004572 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.004587 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:28Z","lastTransitionTime":"2026-01-29T16:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.107838 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.107930 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.107948 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.107976 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.107992 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:28Z","lastTransitionTime":"2026-01-29T16:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.210950 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.210995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.211009 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.211028 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.211038 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:28Z","lastTransitionTime":"2026-01-29T16:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.319424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.319489 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.319502 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.319531 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.319544 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:28Z","lastTransitionTime":"2026-01-29T16:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.324313 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/1.log" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.325177 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/0.log" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.329162 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerID="2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e" exitCode=1 Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.329229 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e"} Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.329288 4895 scope.go:117] "RemoveContainer" containerID="e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.330106 4895 scope.go:117] "RemoveContainer" containerID="2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e" Jan 29 16:12:28 crc kubenswrapper[4895]: E0129 16:12:28.330271 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.349763 4895 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.364312 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.377349 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.404591 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8
f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.422366 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.422399 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.422408 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.422424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.422433 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:28Z","lastTransitionTime":"2026-01-29T16:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.453014 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.480005 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:26Z\\\",\\\"message\\\":\\\" 6193 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:12:26.715800 6193 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:12:26.715819 6193 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:12:26.715837 6193 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0129 16:12:26.715869 6193 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:12:26.715880 6193 factory.go:656] Stopping watch factory\\\\nI0129 16:12:26.715883 6193 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:12:26.715867 6193 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:12:26.715914 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:12:26.715970 6193 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 16:12:26.715994 6193 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:12:26.716141 6193 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:28Z\\\",\\\"message\\\":\\\"],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 16:12:28.281623 6342 services_controller.go:356] Processing sync for service openshift-multus/multus-admission-controller for network=default\\\\nF0129 16:12:28.281664 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network 
policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:12:28.281671 6342 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default ar\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.494505 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.506913 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.519988 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379
b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.524909 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.524956 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.524970 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.524990 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.525005 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:28Z","lastTransitionTime":"2026-01-29T16:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.532957 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.547220 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.560682 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.574083 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.584372 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.597703 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.627986 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.628047 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.628061 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.628081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.628095 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:28Z","lastTransitionTime":"2026-01-29T16:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.731648 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.731706 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.731720 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.731755 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.731777 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:28Z","lastTransitionTime":"2026-01-29T16:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.834336 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.834485 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.834503 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.834530 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.834545 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:28Z","lastTransitionTime":"2026-01-29T16:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.937952 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.938001 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.938016 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.938038 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.938052 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:28Z","lastTransitionTime":"2026-01-29T16:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:28 crc kubenswrapper[4895]: I0129 16:12:28.984987 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:29:00.206256522 +0000 UTC Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.036507 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.036574 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.036638 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:29 crc kubenswrapper[4895]: E0129 16:12:29.036741 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:29 crc kubenswrapper[4895]: E0129 16:12:29.036981 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:29 crc kubenswrapper[4895]: E0129 16:12:29.037155 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.041222 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.041264 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.041277 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.041357 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.041377 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:29Z","lastTransitionTime":"2026-01-29T16:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.147077 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.147135 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.147148 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.147169 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.147182 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:29Z","lastTransitionTime":"2026-01-29T16:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.250187 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.250287 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.250308 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.250341 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.250359 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:29Z","lastTransitionTime":"2026-01-29T16:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.337110 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/1.log" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.352492 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.352554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.352567 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.352588 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.352603 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:29Z","lastTransitionTime":"2026-01-29T16:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.456394 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.456460 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.456475 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.456498 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.456517 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:29Z","lastTransitionTime":"2026-01-29T16:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.559748 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.559824 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.559847 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.559926 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.559954 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:29Z","lastTransitionTime":"2026-01-29T16:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.663904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.663981 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.664000 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.664027 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.664060 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:29Z","lastTransitionTime":"2026-01-29T16:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.767447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.767512 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.767526 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.767548 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.767562 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:29Z","lastTransitionTime":"2026-01-29T16:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.870203 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.870283 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.870309 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.870335 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.870354 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:29Z","lastTransitionTime":"2026-01-29T16:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.973430 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.973493 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.973506 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.973532 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.973546 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:29Z","lastTransitionTime":"2026-01-29T16:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:29 crc kubenswrapper[4895]: I0129 16:12:29.986196 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:11:13.297155337 +0000 UTC Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.077033 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.077074 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.077083 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.077099 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.077109 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:30Z","lastTransitionTime":"2026-01-29T16:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.093109 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f"] Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.093977 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.097718 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.098061 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.109358 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.134221 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba74
0f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.150069 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.167088 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 
16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.180942 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.181010 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.181043 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.181078 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.181101 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:30Z","lastTransitionTime":"2026-01-29T16:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.186831 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.204283 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.222487 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.241547 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.249885 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/16645b28-c655-4cea-b3cb-f522232734f2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wr56f\" (UID: \"16645b28-c655-4cea-b3cb-f522232734f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.250002 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/16645b28-c655-4cea-b3cb-f522232734f2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wr56f\" (UID: \"16645b28-c655-4cea-b3cb-f522232734f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.250142 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfd4d\" (UniqueName: \"kubernetes.io/projected/16645b28-c655-4cea-b3cb-f522232734f2-kube-api-access-pfd4d\") pod \"ovnkube-control-plane-749d76644c-wr56f\" (UID: \"16645b28-c655-4cea-b3cb-f522232734f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.250840 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/16645b28-c655-4cea-b3cb-f522232734f2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wr56f\" (UID: \"16645b28-c655-4cea-b3cb-f522232734f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.267422 4895 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.284125 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.284193 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.284209 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.284233 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.284249 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:30Z","lastTransitionTime":"2026-01-29T16:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.288469 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.310546 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:26Z\\\",\\\"message\\\":\\\" 6193 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:12:26.715800 6193 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:12:26.715819 6193 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:12:26.715837 6193 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0129 16:12:26.715869 6193 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:12:26.715880 6193 factory.go:656] Stopping watch factory\\\\nI0129 16:12:26.715883 6193 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:12:26.715867 6193 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:12:26.715914 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:12:26.715970 6193 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 16:12:26.715994 6193 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:12:26.716141 6193 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:28Z\\\",\\\"message\\\":\\\"],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 16:12:28.281623 6342 services_controller.go:356] Processing sync for service openshift-multus/multus-admission-controller for network=default\\\\nF0129 16:12:28.281664 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network 
policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:12:28.281671 6342 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default ar\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.326258 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.345263 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.351827 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/16645b28-c655-4cea-b3cb-f522232734f2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wr56f\" (UID: \"16645b28-c655-4cea-b3cb-f522232734f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.351908 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/16645b28-c655-4cea-b3cb-f522232734f2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wr56f\" (UID: \"16645b28-c655-4cea-b3cb-f522232734f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.351938 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/16645b28-c655-4cea-b3cb-f522232734f2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wr56f\" (UID: \"16645b28-c655-4cea-b3cb-f522232734f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.351965 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pfd4d\" (UniqueName: \"kubernetes.io/projected/16645b28-c655-4cea-b3cb-f522232734f2-kube-api-access-pfd4d\") pod \"ovnkube-control-plane-749d76644c-wr56f\" (UID: \"16645b28-c655-4cea-b3cb-f522232734f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.352936 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/16645b28-c655-4cea-b3cb-f522232734f2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wr56f\" (UID: \"16645b28-c655-4cea-b3cb-f522232734f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.353172 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/16645b28-c655-4cea-b3cb-f522232734f2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wr56f\" (UID: \"16645b28-c655-4cea-b3cb-f522232734f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.359056 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/16645b28-c655-4cea-b3cb-f522232734f2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wr56f\" (UID: \"16645b28-c655-4cea-b3cb-f522232734f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.364486 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.370009 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfd4d\" (UniqueName: \"kubernetes.io/projected/16645b28-c655-4cea-b3cb-f522232734f2-kube-api-access-pfd4d\") pod \"ovnkube-control-plane-749d76644c-wr56f\" (UID: \"16645b28-c655-4cea-b3cb-f522232734f2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.381862 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.387085 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.387148 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.387162 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:30 crc 
kubenswrapper[4895]: I0129 16:12:30.387187 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.387273 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:30Z","lastTransitionTime":"2026-01-29T16:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.401285 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8
3762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.415259 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" Jan 29 16:12:30 crc kubenswrapper[4895]: W0129 16:12:30.436015 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16645b28_c655_4cea_b3cb_f522232734f2.slice/crio-a0c8ff5c00fde9ff77bc5c8d6a6ac9e820fd51865a5eecbe9c2e94a69b9a21b4 WatchSource:0}: Error finding container a0c8ff5c00fde9ff77bc5c8d6a6ac9e820fd51865a5eecbe9c2e94a69b9a21b4: Status 404 returned error can't find the container with id a0c8ff5c00fde9ff77bc5c8d6a6ac9e820fd51865a5eecbe9c2e94a69b9a21b4 Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.491087 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.491128 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.491141 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.491160 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.491176 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:30Z","lastTransitionTime":"2026-01-29T16:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.593965 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.594018 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.594032 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.594056 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.594072 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:30Z","lastTransitionTime":"2026-01-29T16:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.698090 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.698141 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.698153 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.698169 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.698180 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:30Z","lastTransitionTime":"2026-01-29T16:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.800391 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.800450 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.800461 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.800483 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.800494 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:30Z","lastTransitionTime":"2026-01-29T16:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.903948 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.904368 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.904391 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.904414 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.904427 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:30Z","lastTransitionTime":"2026-01-29T16:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:30 crc kubenswrapper[4895]: I0129 16:12:30.987105 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:29:49.774299143 +0000 UTC Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.007850 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.007915 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.007925 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.007940 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.007950 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:31Z","lastTransitionTime":"2026-01-29T16:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.036656 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.036660 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.036839 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:31 crc kubenswrapper[4895]: E0129 16:12:31.037023 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:31 crc kubenswrapper[4895]: E0129 16:12:31.037134 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:31 crc kubenswrapper[4895]: E0129 16:12:31.037261 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.110510 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.110551 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.110559 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.110576 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.110590 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:31Z","lastTransitionTime":"2026-01-29T16:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.214029 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.214093 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.214113 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.214136 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.214150 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:31Z","lastTransitionTime":"2026-01-29T16:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.219460 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-h9mkw"] Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.222147 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:31 crc kubenswrapper[4895]: E0129 16:12:31.222316 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.241602 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d4632e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.257325 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.276991 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.299968 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d
5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.317483 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.317759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.317786 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.317825 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.317853 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:31Z","lastTransitionTime":"2026-01-29T16:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.318004 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.337172 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 
16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.352233 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" event={"ID":"16645b28-c655-4cea-b3cb-f522232734f2","Type":"ContainerStarted","Data":"f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.352541 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" event={"ID":"16645b28-c655-4cea-b3cb-f522232734f2","Type":"ContainerStarted","Data":"21e2f8490f2cafdd837ffa465ead2cf783e1566e3dc680133e408a8d789444b9"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.352628 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" event={"ID":"16645b28-c655-4cea-b3cb-f522232734f2","Type":"ContainerStarted","Data":"a0c8ff5c00fde9ff77bc5c8d6a6ac9e820fd51865a5eecbe9c2e94a69b9a21b4"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.355353 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.363383 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.363452 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jnfg\" (UniqueName: \"kubernetes.io/projected/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-kube-api-access-7jnfg\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.369494 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.389797 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.403636 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc 
kubenswrapper[4895]: I0129 16:12:31.417355 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.420172 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.420216 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.420233 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.420260 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.420275 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:31Z","lastTransitionTime":"2026-01-29T16:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.437062 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.452183 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.464573 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:31 crc kubenswrapper[4895]: 
I0129 16:12:31.464619 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jnfg\" (UniqueName: \"kubernetes.io/projected/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-kube-api-access-7jnfg\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:31 crc kubenswrapper[4895]: E0129 16:12:31.464851 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:12:31 crc kubenswrapper[4895]: E0129 16:12:31.464971 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs podName:5113e2b8-dc97-42a1-aa1c-3d604cada8c2 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:31.964946983 +0000 UTC m=+35.767924247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs") pod "network-metrics-daemon-h9mkw" (UID: "5113e2b8-dc97-42a1-aa1c-3d604cada8c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.475187 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:26Z\\\",\\\"message\\\":\\\" 6193 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:12:26.715800 6193 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:12:26.715819 6193 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:12:26.715837 6193 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0129 16:12:26.715869 6193 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:12:26.715880 6193 factory.go:656] Stopping watch factory\\\\nI0129 16:12:26.715883 6193 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:12:26.715867 6193 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:12:26.715914 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:12:26.715970 6193 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 16:12:26.715994 6193 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:12:26.716141 6193 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:28Z\\\",\\\"message\\\":\\\"],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 16:12:28.281623 6342 services_controller.go:356] Processing sync for service openshift-multus/multus-admission-controller for network=default\\\\nF0129 16:12:28.281664 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network 
policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:12:28.281671 6342 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default ar\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.480747 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jnfg\" (UniqueName: 
\"kubernetes.io/projected/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-kube-api-access-7jnfg\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.491049 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.504610 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.522053 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.523289 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.523328 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.523340 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.523365 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.523378 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:31Z","lastTransitionTime":"2026-01-29T16:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.542056 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.557501 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.576509 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.593647 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.608392 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.621532 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49
a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.625988 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.626027 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.626039 4895 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.626057 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.626067 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:31Z","lastTransitionTime":"2026-01-29T16:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.643588 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e
3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.662468 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.680539 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.698901 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc 
kubenswrapper[4895]: I0129 16:12:31.718078 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc 
kubenswrapper[4895]: I0129 16:12:31.729215 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.729276 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.729288 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.729318 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.729330 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:31Z","lastTransitionTime":"2026-01-29T16:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.733576 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.751435 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.770963 4895 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.790349 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.806413 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.832563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.832604 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.832624 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.832650 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.832671 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:31Z","lastTransitionTime":"2026-01-29T16:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.838037 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:26Z\\\",\\\"message\\\":\\\" 6193 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:12:26.715800 6193 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:12:26.715819 6193 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:12:26.715837 6193 handler.go:208] Removed 
*v1.Namespace event handler 5\\\\nI0129 16:12:26.715869 6193 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:12:26.715880 6193 factory.go:656] Stopping watch factory\\\\nI0129 16:12:26.715883 6193 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:12:26.715867 6193 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:12:26.715914 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:12:26.715970 6193 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 16:12:26.715994 6193 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:12:26.716141 6193 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:28Z\\\",\\\"message\\\":\\\"],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 16:12:28.281623 6342 services_controller.go:356] Processing sync for service openshift-multus/multus-admission-controller for network=default\\\\nF0129 16:12:28.281664 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:12:28.281671 6342 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default ar\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"
mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.935476 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.935533 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.935549 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.935575 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.935592 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:31Z","lastTransitionTime":"2026-01-29T16:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.971043 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:31 crc kubenswrapper[4895]: E0129 16:12:31.971262 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:12:31 crc kubenswrapper[4895]: E0129 16:12:31.971418 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs podName:5113e2b8-dc97-42a1-aa1c-3d604cada8c2 nodeName:}" failed. 
No retries permitted until 2026-01-29 16:12:32.971379761 +0000 UTC m=+36.774357235 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs") pod "network-metrics-daemon-h9mkw" (UID: "5113e2b8-dc97-42a1-aa1c-3d604cada8c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:12:31 crc kubenswrapper[4895]: I0129 16:12:31.987387 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:50:40.862389423 +0000 UTC Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.038286 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.039008 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.039066 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.039101 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.039124 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:32Z","lastTransitionTime":"2026-01-29T16:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.141394 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.141458 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.141475 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.141497 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.141511 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:32Z","lastTransitionTime":"2026-01-29T16:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.245286 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.245355 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.245374 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.245398 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.245415 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:32Z","lastTransitionTime":"2026-01-29T16:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.348447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.348502 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.348517 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.348539 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.348557 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:32Z","lastTransitionTime":"2026-01-29T16:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.452078 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.452144 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.452159 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.452182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.452200 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:32Z","lastTransitionTime":"2026-01-29T16:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.556384 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.556442 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.556451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.556475 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.556487 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:32Z","lastTransitionTime":"2026-01-29T16:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.659823 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.659907 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.659922 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.659947 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.659971 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:32Z","lastTransitionTime":"2026-01-29T16:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.763024 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.763083 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.763097 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.763119 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.763131 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:32Z","lastTransitionTime":"2026-01-29T16:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.779745 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.780033 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 16:12:48.779990981 +0000 UTC m=+52.582968255 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.780174 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.780330 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.780389 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.780489 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.780562 4895 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:48.780524566 +0000 UTC m=+52.583502020 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.780602 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:48.780588498 +0000 UTC m=+52.583566002 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.866755 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.866820 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.866833 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.866852 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.866873 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:32Z","lastTransitionTime":"2026-01-29T16:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.881638 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.881708 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.881936 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.882007 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.881956 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.882032 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 
16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.882056 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.882075 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.882172 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:48.882143884 +0000 UTC m=+52.685121178 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.882207 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:48.882192945 +0000 UTC m=+52.685170239 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.945957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.946034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.946046 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.946068 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.946082 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:32Z","lastTransitionTime":"2026-01-29T16:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.969565 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.975446 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.975491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.975503 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.975523 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.975538 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:32Z","lastTransitionTime":"2026-01-29T16:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.982733 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.982945 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.983052 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs podName:5113e2b8-dc97-42a1-aa1c-3d604cada8c2 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:34.98302775 +0000 UTC m=+38.786005034 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs") pod "network-metrics-daemon-h9mkw" (UID: "5113e2b8-dc97-42a1-aa1c-3d604cada8c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.988309 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:54:05.818500472 +0000 UTC Jan 29 16:12:32 crc kubenswrapper[4895]: E0129 16:12:32.992162 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.996984 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.997033 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.997047 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.997069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:32 crc kubenswrapper[4895]: I0129 16:12:32.997083 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:32Z","lastTransitionTime":"2026-01-29T16:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:33 crc kubenswrapper[4895]: E0129 16:12:33.014158 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.019538 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.019586 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.019599 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.019624 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.019635 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:33Z","lastTransitionTime":"2026-01-29T16:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:33 crc kubenswrapper[4895]: E0129 16:12:33.034336 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.036541 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.038020 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.038034 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.038400 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:33 crc kubenswrapper[4895]: E0129 16:12:33.038502 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:33 crc kubenswrapper[4895]: E0129 16:12:33.038551 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:33 crc kubenswrapper[4895]: E0129 16:12:33.039392 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:33 crc kubenswrapper[4895]: E0129 16:12:33.039552 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.041093 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.041255 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.041348 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.041447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.041534 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:33Z","lastTransitionTime":"2026-01-29T16:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:33 crc kubenswrapper[4895]: E0129 16:12:33.060126 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:33 crc kubenswrapper[4895]: E0129 16:12:33.060304 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.062145 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.062254 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.062333 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.062407 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.062477 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:33Z","lastTransitionTime":"2026-01-29T16:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.165652 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.166183 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.166337 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.166504 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.166640 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:33Z","lastTransitionTime":"2026-01-29T16:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.270443 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.270524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.270541 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.270569 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.270590 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:33Z","lastTransitionTime":"2026-01-29T16:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.373655 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.373703 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.373716 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.373735 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.373750 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:33Z","lastTransitionTime":"2026-01-29T16:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.477044 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.477122 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.477140 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.477168 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.477185 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:33Z","lastTransitionTime":"2026-01-29T16:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.580390 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.580598 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.580632 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.580681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.580716 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:33Z","lastTransitionTime":"2026-01-29T16:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.684349 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.684382 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.684392 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.684406 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.684417 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:33Z","lastTransitionTime":"2026-01-29T16:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.787850 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.787940 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.787953 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.787972 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.787983 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:33Z","lastTransitionTime":"2026-01-29T16:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.891553 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.891631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.891649 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.891677 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.891697 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:33Z","lastTransitionTime":"2026-01-29T16:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.989078 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:36:27.663462346 +0000 UTC Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.994961 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.995015 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.995033 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.995058 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:33 crc kubenswrapper[4895]: I0129 16:12:33.995078 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:33Z","lastTransitionTime":"2026-01-29T16:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.097962 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.098001 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.098042 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.098094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.098113 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:34Z","lastTransitionTime":"2026-01-29T16:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.200476 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.200530 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.200545 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.200571 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.200591 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:34Z","lastTransitionTime":"2026-01-29T16:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.303740 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.303815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.303826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.303849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.303866 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:34Z","lastTransitionTime":"2026-01-29T16:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.406854 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.406968 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.406992 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.407021 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.407042 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:34Z","lastTransitionTime":"2026-01-29T16:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.510262 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.510330 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.510352 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.510383 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.510404 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:34Z","lastTransitionTime":"2026-01-29T16:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.614737 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.614817 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.614841 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.614870 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.614932 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:34Z","lastTransitionTime":"2026-01-29T16:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.718742 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.718816 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.718843 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.718901 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.718925 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:34Z","lastTransitionTime":"2026-01-29T16:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.822087 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.822152 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.822165 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.822186 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.822200 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:34Z","lastTransitionTime":"2026-01-29T16:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.925615 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.925687 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.925705 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.925729 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.925747 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:34Z","lastTransitionTime":"2026-01-29T16:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:34 crc kubenswrapper[4895]: I0129 16:12:34.989264 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:41:10.489883974 +0000 UTC Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.009095 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:35 crc kubenswrapper[4895]: E0129 16:12:35.009275 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:12:35 crc kubenswrapper[4895]: E0129 16:12:35.009370 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs podName:5113e2b8-dc97-42a1-aa1c-3d604cada8c2 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:39.009345934 +0000 UTC m=+42.812323198 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs") pod "network-metrics-daemon-h9mkw" (UID: "5113e2b8-dc97-42a1-aa1c-3d604cada8c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.029614 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.029666 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.029682 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.029707 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.029725 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:35Z","lastTransitionTime":"2026-01-29T16:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.036760 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.036911 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.036926 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:35 crc kubenswrapper[4895]: E0129 16:12:35.037118 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.037178 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:35 crc kubenswrapper[4895]: E0129 16:12:35.037252 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:35 crc kubenswrapper[4895]: E0129 16:12:35.037361 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:35 crc kubenswrapper[4895]: E0129 16:12:35.037537 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.133162 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.133202 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.133214 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.133233 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.133244 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:35Z","lastTransitionTime":"2026-01-29T16:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.236611 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.236676 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.236697 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.236730 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.236754 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:35Z","lastTransitionTime":"2026-01-29T16:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.339599 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.339671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.339686 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.339709 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.339729 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:35Z","lastTransitionTime":"2026-01-29T16:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.442832 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.442915 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.442954 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.442974 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.442986 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:35Z","lastTransitionTime":"2026-01-29T16:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.545845 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.545912 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.545944 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.545965 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.545977 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:35Z","lastTransitionTime":"2026-01-29T16:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.649902 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.650012 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.650026 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.650046 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.650061 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:35Z","lastTransitionTime":"2026-01-29T16:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.753124 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.753167 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.753181 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.753219 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.753233 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:35Z","lastTransitionTime":"2026-01-29T16:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.855462 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.855515 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.855524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.855539 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.855549 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:35Z","lastTransitionTime":"2026-01-29T16:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.963639 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.963687 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.963697 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.963714 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.963726 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:35Z","lastTransitionTime":"2026-01-29T16:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:35 crc kubenswrapper[4895]: I0129 16:12:35.990185 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:23:46.612304773 +0000 UTC Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.066453 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.066495 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.066506 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.066522 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.066533 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:36Z","lastTransitionTime":"2026-01-29T16:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.092063 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.108406 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.127732 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.144599 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.158065 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.169286 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.169318 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.169335 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.169360 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.169374 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:36Z","lastTransitionTime":"2026-01-29T16:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.171507 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.201143 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:26Z\\\",\\\"message\\\":\\\" 6193 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:12:26.715800 6193 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:12:26.715819 6193 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:12:26.715837 6193 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0129 16:12:26.715869 6193 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:12:26.715880 6193 factory.go:656] Stopping watch factory\\\\nI0129 16:12:26.715883 6193 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:12:26.715867 6193 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:12:26.715914 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:12:26.715970 6193 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 16:12:26.715994 6193 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:12:26.716141 6193 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:28Z\\\",\\\"message\\\":\\\"],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 16:12:28.281623 6342 services_controller.go:356] Processing sync for service openshift-multus/multus-admission-controller for network=default\\\\nF0129 16:12:28.281664 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network 
policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:12:28.281671 6342 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default ar\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.222714 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.239292 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.257034 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.273215 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.273274 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.273286 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:36 crc 
kubenswrapper[4895]: I0129 16:12:36.273307 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.273322 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:36Z","lastTransitionTime":"2026-01-29T16:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.276501 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8
3762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.290614 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.309594 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.330407 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3
c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.352160 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56b
f2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.369108 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.376350 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.376394 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.376411 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.376433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.376450 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:36Z","lastTransitionTime":"2026-01-29T16:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.389688 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.410001 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:36 crc 
kubenswrapper[4895]: I0129 16:12:36.479006 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.479069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.479086 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.479108 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.479124 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:36Z","lastTransitionTime":"2026-01-29T16:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.582529 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.582585 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.582598 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.582616 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.582630 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:36Z","lastTransitionTime":"2026-01-29T16:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.686230 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.686321 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.686345 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.686379 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.686405 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:36Z","lastTransitionTime":"2026-01-29T16:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.815661 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.815720 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.815735 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.815758 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.815774 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:36Z","lastTransitionTime":"2026-01-29T16:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.919670 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.920074 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.920212 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.920376 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.920545 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:36Z","lastTransitionTime":"2026-01-29T16:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:36 crc kubenswrapper[4895]: I0129 16:12:36.991213 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:20:23.012707906 +0000 UTC Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.024084 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.024153 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.024168 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.024191 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.024208 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:37Z","lastTransitionTime":"2026-01-29T16:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.036106 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.036257 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.036397 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:37 crc kubenswrapper[4895]: E0129 16:12:37.036495 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.036595 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:37 crc kubenswrapper[4895]: E0129 16:12:37.036802 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:37 crc kubenswrapper[4895]: E0129 16:12:37.036977 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:37 crc kubenswrapper[4895]: E0129 16:12:37.037093 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.062566 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.101432 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.125440 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.129049 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.129104 4895 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.129117 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.129138 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.129150 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:37Z","lastTransitionTime":"2026-01-29T16:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.147468 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.167207 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.185866 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.199813 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2
cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.217743 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc 
kubenswrapper[4895]: I0129 16:12:37.231964 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.232028 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.232040 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.232066 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.232079 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:37Z","lastTransitionTime":"2026-01-29T16:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.234245 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1
a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.252462 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.270366 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.286832 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.330149 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6f34a8bf555aab744752cd21526b8fb85106b305529a4af5aa69624a70da23a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:26Z\\\",\\\"message\\\":\\\" 6193 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:12:26.715800 6193 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 16:12:26.715819 6193 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:12:26.715837 6193 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0129 16:12:26.715869 6193 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:12:26.715880 6193 factory.go:656] Stopping watch factory\\\\nI0129 16:12:26.715883 6193 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:12:26.715867 6193 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:12:26.715914 6193 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:12:26.715970 6193 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 16:12:26.715994 6193 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:12:26.716141 6193 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:28Z\\\",\\\"message\\\":\\\"],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 16:12:28.281623 6342 services_controller.go:356] Processing sync for service openshift-multus/multus-admission-controller for network=default\\\\nF0129 16:12:28.281664 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network 
policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:12:28.281671 6342 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default ar\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.337038 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 
16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.337100 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.337117 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.337141 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.337157 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:37Z","lastTransitionTime":"2026-01-29T16:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.365339 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.383157 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.399460 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.417079 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.440595 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.440641 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.440653 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:37 crc 
kubenswrapper[4895]: I0129 16:12:37.440673 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.440685 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:37Z","lastTransitionTime":"2026-01-29T16:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.544100 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.544147 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.544157 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.544176 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.544188 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:37Z","lastTransitionTime":"2026-01-29T16:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.652350 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.652790 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.652995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.653146 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.653241 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:37Z","lastTransitionTime":"2026-01-29T16:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.756741 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.756787 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.756797 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.756816 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.756829 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:37Z","lastTransitionTime":"2026-01-29T16:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.859565 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.859662 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.859677 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.859696 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.859714 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:37Z","lastTransitionTime":"2026-01-29T16:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.963445 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.963523 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.963540 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.963564 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.963582 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:37Z","lastTransitionTime":"2026-01-29T16:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:37 crc kubenswrapper[4895]: I0129 16:12:37.991892 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:56:11.431628379 +0000 UTC Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.076322 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.076386 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.076402 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.076424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.076440 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:38Z","lastTransitionTime":"2026-01-29T16:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.179782 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.179844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.179913 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.179949 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.179974 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:38Z","lastTransitionTime":"2026-01-29T16:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.282935 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.283003 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.283019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.283043 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.283062 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:38Z","lastTransitionTime":"2026-01-29T16:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.386173 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.386279 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.386297 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.386324 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.386359 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:38Z","lastTransitionTime":"2026-01-29T16:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.489619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.489709 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.489727 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.489756 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.489775 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:38Z","lastTransitionTime":"2026-01-29T16:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.593419 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.593495 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.593520 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.593552 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.593575 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:38Z","lastTransitionTime":"2026-01-29T16:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.696466 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.696538 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.696559 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.696588 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.696609 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:38Z","lastTransitionTime":"2026-01-29T16:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.800258 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.800322 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.800331 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.800354 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.800365 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:38Z","lastTransitionTime":"2026-01-29T16:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.902860 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.902993 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.903023 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.903053 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.903075 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:38Z","lastTransitionTime":"2026-01-29T16:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:38 crc kubenswrapper[4895]: I0129 16:12:38.993162 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:00:51.443133915 +0000 UTC Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.007624 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.007710 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.007727 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.007752 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.007771 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:39Z","lastTransitionTime":"2026-01-29T16:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.036184 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.036229 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.036399 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:39 crc kubenswrapper[4895]: E0129 16:12:39.036550 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.036587 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:39 crc kubenswrapper[4895]: E0129 16:12:39.036783 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:39 crc kubenswrapper[4895]: E0129 16:12:39.037033 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:39 crc kubenswrapper[4895]: E0129 16:12:39.037117 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.086478 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:39 crc kubenswrapper[4895]: E0129 16:12:39.086719 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:12:39 crc kubenswrapper[4895]: E0129 16:12:39.086855 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs podName:5113e2b8-dc97-42a1-aa1c-3d604cada8c2 nodeName:}" failed. No retries permitted until 2026-01-29 16:12:47.086822094 +0000 UTC m=+50.889799558 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs") pod "network-metrics-daemon-h9mkw" (UID: "5113e2b8-dc97-42a1-aa1c-3d604cada8c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.111919 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.111983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.111994 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.112013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.112027 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:39Z","lastTransitionTime":"2026-01-29T16:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.216211 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.216273 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.216287 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.216309 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.216326 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:39Z","lastTransitionTime":"2026-01-29T16:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.320214 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.320287 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.320305 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.320333 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.320352 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:39Z","lastTransitionTime":"2026-01-29T16:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.422801 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.422859 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.422898 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.422922 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.422936 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:39Z","lastTransitionTime":"2026-01-29T16:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.525929 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.526005 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.526030 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.526061 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.526085 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:39Z","lastTransitionTime":"2026-01-29T16:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.629253 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.629325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.629337 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.629363 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.629379 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:39Z","lastTransitionTime":"2026-01-29T16:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.732510 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.732558 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.732589 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.732606 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.732617 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:39Z","lastTransitionTime":"2026-01-29T16:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.836276 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.836363 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.836389 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.836424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.836493 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:39Z","lastTransitionTime":"2026-01-29T16:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.939584 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.939663 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.939687 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.939717 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.939737 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:39Z","lastTransitionTime":"2026-01-29T16:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:39 crc kubenswrapper[4895]: I0129 16:12:39.994380 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:15:06.437486389 +0000 UTC Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.043040 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.043107 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.043126 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.043148 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.043161 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:40Z","lastTransitionTime":"2026-01-29T16:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.146403 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.146467 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.146479 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.146503 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.146517 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:40Z","lastTransitionTime":"2026-01-29T16:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.249693 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.249742 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.249752 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.249771 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.249787 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:40Z","lastTransitionTime":"2026-01-29T16:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.353237 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.353291 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.353302 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.353325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.353337 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:40Z","lastTransitionTime":"2026-01-29T16:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.456776 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.456846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.456885 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.456913 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.456931 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:40Z","lastTransitionTime":"2026-01-29T16:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.560810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.560917 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.560933 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.560957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.560971 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:40Z","lastTransitionTime":"2026-01-29T16:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.663602 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.663681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.663700 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.663729 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.663750 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:40Z","lastTransitionTime":"2026-01-29T16:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.683844 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.685076 4895 scope.go:117] "RemoveContainer" containerID="2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.707375 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfb
b085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.729924 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49
a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.759081 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.766622 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.766666 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.766687 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.766708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.766721 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:40Z","lastTransitionTime":"2026-01-29T16:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.777114 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.796764 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.812094 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.826054 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.839989 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 
2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.855515 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63b
f454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac
117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.870622 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.870688 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.870710 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.870739 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.870759 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:40Z","lastTransitionTime":"2026-01-29T16:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.877824 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.895233 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.909849 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.929348 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:28Z\\\",\\\"message\\\":\\\"],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 16:12:28.281623 6342 services_controller.go:356] Processing sync for service 
openshift-multus/multus-admission-controller for network=default\\\\nF0129 16:12:28.281664 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:12:28.281671 6342 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default ar\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e2
08e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.942349 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.954638 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.971227 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.973379 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.973422 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.973439 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.973462 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.973477 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:40Z","lastTransitionTime":"2026-01-29T16:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.993653 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d4632e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:40 crc kubenswrapper[4895]: I0129 16:12:40.994596 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-04 16:19:36.626360139 +0000 UTC Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.036528 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.036556 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:41 crc kubenswrapper[4895]: E0129 16:12:41.036719 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.036786 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:41 crc kubenswrapper[4895]: E0129 16:12:41.036985 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:41 crc kubenswrapper[4895]: E0129 16:12:41.037158 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.037196 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:41 crc kubenswrapper[4895]: E0129 16:12:41.037319 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.076520 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.076591 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.076602 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.076620 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.076634 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:41Z","lastTransitionTime":"2026-01-29T16:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.179478 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.179524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.179537 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.179562 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.179576 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:41Z","lastTransitionTime":"2026-01-29T16:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.282708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.282766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.282782 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.282804 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.282819 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:41Z","lastTransitionTime":"2026-01-29T16:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.385134 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.385177 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.385189 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.385223 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.385234 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:41Z","lastTransitionTime":"2026-01-29T16:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.419311 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/1.log" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.422274 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerStarted","Data":"ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34"} Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.422696 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.438753 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.455242 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.478919 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:28Z\\\",\\\"message\\\":\\\"],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 16:12:28.281623 6342 services_controller.go:356] Processing sync for service 
openshift-multus/multus-admission-controller for network=default\\\\nF0129 16:12:28.281664 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:12:28.281671 6342 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default 
ar\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.487767 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.487843 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.487857 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.487903 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.487915 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:41Z","lastTransitionTime":"2026-01-29T16:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.497689 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:
12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.512525 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.531495 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.552368 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.567117 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.580672 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.590225 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.590299 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.590321 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:41 crc 
kubenswrapper[4895]: I0129 16:12:41.590347 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.590367 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:41Z","lastTransitionTime":"2026-01-29T16:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.599855 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8
3762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.624205 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56b
f2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.640558 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.655806 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.667975 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.683552 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2
cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.693528 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.693585 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.693598 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.693619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.693633 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:41Z","lastTransitionTime":"2026-01-29T16:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.710094 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.727438 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.797051 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.797140 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.797158 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.797219 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.797236 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:41Z","lastTransitionTime":"2026-01-29T16:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.900461 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.900511 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.900524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.900541 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.900552 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:41Z","lastTransitionTime":"2026-01-29T16:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:41 crc kubenswrapper[4895]: I0129 16:12:41.995346 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:26:58.13517198 +0000 UTC Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.003624 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.003943 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.004031 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.004122 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.004205 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:42Z","lastTransitionTime":"2026-01-29T16:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.107342 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.107416 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.107428 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.107452 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.107466 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:42Z","lastTransitionTime":"2026-01-29T16:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.210794 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.210866 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.210899 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.210921 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.210934 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:42Z","lastTransitionTime":"2026-01-29T16:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.314936 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.315037 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.315061 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.315096 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.315127 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:42Z","lastTransitionTime":"2026-01-29T16:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.417938 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.417981 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.417991 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.418006 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.418017 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:42Z","lastTransitionTime":"2026-01-29T16:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.428184 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/2.log" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.429413 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/1.log" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.433800 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerID="ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34" exitCode=1 Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.433886 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34"} Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.433954 4895 scope.go:117] "RemoveContainer" containerID="2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.434605 4895 scope.go:117] "RemoveContainer" containerID="ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34" Jan 29 16:12:42 crc kubenswrapper[4895]: E0129 16:12:42.434807 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.452015 4895 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.471924 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.495515 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.521214 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.521995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.522062 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.522076 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.522103 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.522116 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:42Z","lastTransitionTime":"2026-01-29T16:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.540943 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.554566 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.569515 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.583973 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f84
90f2cafdd837ffa465ead2cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\
\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.605774 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.619729 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc 
kubenswrapper[4895]: I0129 16:12:42.625121 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.625162 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.625176 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.625198 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.625210 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:42Z","lastTransitionTime":"2026-01-29T16:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.635964 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.651400 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.675062 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b783932d916764d7a94048d27139cfeebbd668c4c90a421030ece235c51ac8e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:28Z\\\",\\\"message\\\":\\\"],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 16:12:28.281623 6342 services_controller.go:356] Processing sync for service 
openshift-multus/multus-admission-controller for network=default\\\\nF0129 16:12:28.281664 6342 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for banpInformer during admin network policy controller initialization, handler {0x1fcc300 0x1fcbfe0 0x1fcbf80} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:28Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:12:28.281671 6342 lb_config.go:1031] Cluster endpoints for openshift-image-registry/image-registry for network=default ar\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:41Z\\\",\\\"message\\\":\\\"manager/package-server-manager-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.110\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:12:41.627066 6535 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 16:12:41.627086 6535 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:12:41.627085 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-lqtb8\\\\nF0129 16:12:41.627094 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\
",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.691158 4895 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/hos
t/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.705239 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.722681 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.728700 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.728756 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.728766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.728783 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.728798 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:42Z","lastTransitionTime":"2026-01-29T16:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.741472 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.832612 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.832676 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.832696 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.832724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.832739 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:42Z","lastTransitionTime":"2026-01-29T16:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.936625 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.936700 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.936717 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.936747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.936775 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:42Z","lastTransitionTime":"2026-01-29T16:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:42 crc kubenswrapper[4895]: I0129 16:12:42.995958 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:37:47.827157017 +0000 UTC Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.036586 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.036660 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.036668 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:43 crc kubenswrapper[4895]: E0129 16:12:43.036767 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.036942 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:43 crc kubenswrapper[4895]: E0129 16:12:43.037103 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:43 crc kubenswrapper[4895]: E0129 16:12:43.037230 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:43 crc kubenswrapper[4895]: E0129 16:12:43.037370 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.039926 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.039998 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.040019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.040044 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.040063 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.143325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.143379 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.143391 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.143411 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.143425 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.198327 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.198376 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.198388 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.198406 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.198418 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: E0129 16:12:43.215696 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.220271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.220328 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.220338 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.220358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.220371 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: E0129 16:12:43.240183 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.245907 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.245956 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.245967 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.245990 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.246001 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.263002 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.263063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.263074 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.263092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.263105 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.280755 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.280812 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.280824 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.280844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.280859 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: E0129 16:12:43.294695 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: E0129 16:12:43.294829 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.296926 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.296965 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.296978 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.296998 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.297013 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.400537 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.400597 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.400617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.400648 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.400687 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.440257 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/2.log" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.446625 4895 scope.go:117] "RemoveContainer" containerID="ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34" Jan 29 16:12:43 crc kubenswrapper[4895]: E0129 16:12:43.446815 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.470030 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56b
f2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.488235 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.503675 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.503977 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.504047 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.504113 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.504231 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.508600 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.523573 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.538866 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49
a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.562938 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.579209 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc 
kubenswrapper[4895]: I0129 16:12:43.598145 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.607340 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.607656 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.607812 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.607948 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.608035 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.617465 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.643717 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:41Z\\\",\\\"message\\\":\\\"manager/package-server-manager-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.110\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:12:41.627066 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 16:12:41.627086 6535 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:12:41.627085 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-lqtb8\\\\nF0129 16:12:41.627094 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e2
08e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.661185 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.676392 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.693425 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.707704 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.710745 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.710795 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.710808 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.710830 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.710842 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.722392 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.738947 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.757440 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.814489 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.814543 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.814554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.814571 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.814584 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.918751 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.918832 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.918848 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.918904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.918923 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:43Z","lastTransitionTime":"2026-01-29T16:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:43 crc kubenswrapper[4895]: I0129 16:12:43.996266 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:50:04.171387488 +0000 UTC Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.021513 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.021576 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.021590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.021615 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.021630 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:44Z","lastTransitionTime":"2026-01-29T16:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.125518 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.125599 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.125619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.125648 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.125669 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:44Z","lastTransitionTime":"2026-01-29T16:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.231008 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.231122 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.231139 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.231163 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.231202 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:44Z","lastTransitionTime":"2026-01-29T16:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.335515 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.335582 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.335596 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.335619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.335634 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:44Z","lastTransitionTime":"2026-01-29T16:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.439089 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.439150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.439163 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.439188 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.439534 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:44Z","lastTransitionTime":"2026-01-29T16:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.543208 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.543279 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.543299 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.543325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.543345 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:44Z","lastTransitionTime":"2026-01-29T16:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.646476 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.646560 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.646577 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.646599 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.646615 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:44Z","lastTransitionTime":"2026-01-29T16:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.749919 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.749976 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.749987 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.750007 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.750019 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:44Z","lastTransitionTime":"2026-01-29T16:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.852914 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.852973 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.852988 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.853012 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.853027 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:44Z","lastTransitionTime":"2026-01-29T16:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.956766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.956837 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.956847 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.956896 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.956909 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:44Z","lastTransitionTime":"2026-01-29T16:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:44 crc kubenswrapper[4895]: I0129 16:12:44.997434 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 00:42:50.043751652 +0000 UTC Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.035783 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:45 crc kubenswrapper[4895]: E0129 16:12:45.035987 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.036153 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.036313 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:45 crc kubenswrapper[4895]: E0129 16:12:45.036416 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.036481 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:45 crc kubenswrapper[4895]: E0129 16:12:45.036595 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:45 crc kubenswrapper[4895]: E0129 16:12:45.036732 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.060647 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.060726 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.060751 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.060781 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.060803 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:45Z","lastTransitionTime":"2026-01-29T16:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.164384 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.164427 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.164437 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.164458 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.164471 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:45Z","lastTransitionTime":"2026-01-29T16:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.267581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.267629 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.267642 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.267660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.267669 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:45Z","lastTransitionTime":"2026-01-29T16:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.371146 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.371198 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.371214 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.371241 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.371258 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:45Z","lastTransitionTime":"2026-01-29T16:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.473363 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.473420 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.473433 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.473453 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.473465 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:45Z","lastTransitionTime":"2026-01-29T16:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.576811 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.576948 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.576983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.577022 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.577044 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:45Z","lastTransitionTime":"2026-01-29T16:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.680634 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.680701 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.680720 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.680748 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.680766 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:45Z","lastTransitionTime":"2026-01-29T16:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.784081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.784132 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.784143 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.784162 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.784174 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:45Z","lastTransitionTime":"2026-01-29T16:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.887893 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.887954 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.887967 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.887990 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.888005 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:45Z","lastTransitionTime":"2026-01-29T16:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.942353 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.954296 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.963074 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:45Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.982931 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:45Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.991167 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.991211 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.991228 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:45 crc 
kubenswrapper[4895]: I0129 16:12:45.991256 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.991276 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:45Z","lastTransitionTime":"2026-01-29T16:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:45 crc kubenswrapper[4895]: I0129 16:12:45.997605 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:35:34.584902624 +0000 UTC Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.001570 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:45Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.017102 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.033166 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.047855 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.063744 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2
cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.092352 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.101985 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.102097 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.102125 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.102164 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.102191 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:46Z","lastTransitionTime":"2026-01-29T16:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.118788 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resou
rce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\
\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.137429 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc 
kubenswrapper[4895]: I0129 16:12:46.159995 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:41Z\\\",\\\"message\\\":\\\"manager/package-server-manager-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.110\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:12:41.627066 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 16:12:41.627086 6535 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:12:41.627085 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-lqtb8\\\\nF0129 16:12:41.627094 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e2
08e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.174691 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.189626 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.205547 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.205616 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.205636 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.205664 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.205683 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:46Z","lastTransitionTime":"2026-01-29T16:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.207570 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1
a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.231301 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.247443 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.263965 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.308840 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.308972 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.308992 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.309031 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.309054 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:46Z","lastTransitionTime":"2026-01-29T16:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.412009 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.412069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.412081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.412098 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.412109 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:46Z","lastTransitionTime":"2026-01-29T16:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.515854 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.515927 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.515946 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.515969 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.515982 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:46Z","lastTransitionTime":"2026-01-29T16:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.620347 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.620399 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.620412 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.620432 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.620446 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:46Z","lastTransitionTime":"2026-01-29T16:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.723471 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.723536 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.723554 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.723581 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.723599 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:46Z","lastTransitionTime":"2026-01-29T16:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.826538 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.826587 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.826598 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.826618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.826635 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:46Z","lastTransitionTime":"2026-01-29T16:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.930205 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.930258 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.930272 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.930293 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.930307 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:46Z","lastTransitionTime":"2026-01-29T16:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:46 crc kubenswrapper[4895]: I0129 16:12:46.998308 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 06:29:54.092152159 +0000 UTC Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.034034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.034106 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.034119 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.034142 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.034161 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:47Z","lastTransitionTime":"2026-01-29T16:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.036670 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.036715 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.036732 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.036849 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:47 crc kubenswrapper[4895]: E0129 16:12:47.036988 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:47 crc kubenswrapper[4895]: E0129 16:12:47.037117 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:47 crc kubenswrapper[4895]: E0129 16:12:47.037231 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:47 crc kubenswrapper[4895]: E0129 16:12:47.037316 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.056598 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.073180 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.092824 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.128006 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3
c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.136067 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.136117 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.136129 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.136149 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.136167 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:47Z","lastTransitionTime":"2026-01-29T16:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.150548 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:47 crc kubenswrapper[4895]: E0129 16:12:47.150733 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:12:47 crc kubenswrapper[4895]: E0129 16:12:47.150824 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs podName:5113e2b8-dc97-42a1-aa1c-3d604cada8c2 nodeName:}" failed. No retries permitted until 2026-01-29 16:13:03.150803132 +0000 UTC m=+66.953780476 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs") pod "network-metrics-daemon-h9mkw" (UID: "5113e2b8-dc97-42a1-aa1c-3d604cada8c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.153368 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246
e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.172539 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54915014-84f4-4c65-ad36-11049c97a3c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68014e55c516cafe98fd65b5e162f27e8abb4af9675a2aaf6fecb16377fb35e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95fd8ff11220a00309543179b8c46c74ab7fd2f75a54ce16541ed217121747c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b587eb8a168e54cd2c1bef70157580a7d20cd8e378311a33996f3867de251ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.191607 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.206802 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc 
kubenswrapper[4895]: I0129 16:12:47.229708 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc 
kubenswrapper[4895]: I0129 16:12:47.239583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.239634 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.239648 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.239671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.239689 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:47Z","lastTransitionTime":"2026-01-29T16:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.244133 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.261278 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.277362 4895 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.292607 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.308984 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.331390 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:41Z\\\",\\\"message\\\":\\\"manager/package-server-manager-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.110\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:12:41.627066 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 16:12:41.627086 6535 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:12:41.627085 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-lqtb8\\\\nF0129 16:12:41.627094 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e2
08e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.342599 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.342930 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.343031 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.343146 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.343307 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:47Z","lastTransitionTime":"2026-01-29T16:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.353693 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.372773 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.393928 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.446143 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.446186 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.446196 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.446214 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.446225 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:47Z","lastTransitionTime":"2026-01-29T16:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.549282 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.549336 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.549345 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.549367 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.549378 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:47Z","lastTransitionTime":"2026-01-29T16:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.653084 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.653142 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.653153 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.653175 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.653189 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:47Z","lastTransitionTime":"2026-01-29T16:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.755700 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.755766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.755777 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.755799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.755813 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:47Z","lastTransitionTime":"2026-01-29T16:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.858969 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.859037 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.859049 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.859073 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.859088 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:47Z","lastTransitionTime":"2026-01-29T16:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.962784 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.962852 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.962898 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.962924 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.962942 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:47Z","lastTransitionTime":"2026-01-29T16:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:47 crc kubenswrapper[4895]: I0129 16:12:47.999278 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:48:54.961126305 +0000 UTC Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.066208 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.066271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.066282 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.066303 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.066320 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:48Z","lastTransitionTime":"2026-01-29T16:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.170403 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.170537 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.170562 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.170587 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.170603 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:48Z","lastTransitionTime":"2026-01-29T16:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.274039 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.274105 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.274153 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.274184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.274208 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:48Z","lastTransitionTime":"2026-01-29T16:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.377959 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.378278 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.378302 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.378323 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.378335 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:48Z","lastTransitionTime":"2026-01-29T16:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.481646 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.481725 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.481743 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.481776 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.481797 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:48Z","lastTransitionTime":"2026-01-29T16:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.584747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.584784 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.584793 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.584808 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.584818 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:48Z","lastTransitionTime":"2026-01-29T16:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.688264 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.688316 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.688328 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.688346 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.688357 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:48Z","lastTransitionTime":"2026-01-29T16:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.791166 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.791232 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.791248 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.791271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.791290 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:48Z","lastTransitionTime":"2026-01-29T16:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.867741 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.867934 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.868039 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.868101 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:13:20.868059256 +0000 UTC m=+84.671036560 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.868179 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.868198 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.868284 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:13:20.868261322 +0000 UTC m=+84.671238626 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.868339 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 16:13:20.868307793 +0000 UTC m=+84.671285087 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.895290 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.895356 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.895370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.895392 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.895407 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:48Z","lastTransitionTime":"2026-01-29T16:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.968804 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.968937 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.969217 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.969247 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.969247 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.969318 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.969267 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr 
for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.969414 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:13:20.969391625 +0000 UTC m=+84.772368929 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.969335 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:48 crc kubenswrapper[4895]: E0129 16:12:48.969538 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:13:20.969513158 +0000 UTC m=+84.772490532 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.999327 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.999389 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:48 crc kubenswrapper[4895]: I0129 16:12:48.999406 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:48.999430 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:48.999448 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:48Z","lastTransitionTime":"2026-01-29T16:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:48.999493 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 13:05:43.525237094 +0000 UTC Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.036163 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.036273 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.036367 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.036590 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:49 crc kubenswrapper[4895]: E0129 16:12:49.036605 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:49 crc kubenswrapper[4895]: E0129 16:12:49.036780 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:49 crc kubenswrapper[4895]: E0129 16:12:49.036984 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:49 crc kubenswrapper[4895]: E0129 16:12:49.037170 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.103182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.103243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.103261 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.103287 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.103305 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:49Z","lastTransitionTime":"2026-01-29T16:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.206637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.206709 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.206733 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.206763 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.206789 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:49Z","lastTransitionTime":"2026-01-29T16:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.309682 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.309715 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.309726 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.309748 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.309760 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:49Z","lastTransitionTime":"2026-01-29T16:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.412823 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.412949 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.412975 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.413006 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.413028 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:49Z","lastTransitionTime":"2026-01-29T16:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.516995 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.517083 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.517120 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.517159 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.517197 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:49Z","lastTransitionTime":"2026-01-29T16:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.620318 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.620376 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.620387 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.620411 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.620424 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:49Z","lastTransitionTime":"2026-01-29T16:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.722943 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.723027 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.723046 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.723075 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.723095 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:49Z","lastTransitionTime":"2026-01-29T16:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.826700 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.826775 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.826801 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.826837 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.826910 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:49Z","lastTransitionTime":"2026-01-29T16:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.930896 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.930964 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.930984 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.931011 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:49 crc kubenswrapper[4895]: I0129 16:12:49.931033 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:49Z","lastTransitionTime":"2026-01-29T16:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.000272 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:59:36.753971001 +0000 UTC Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.035060 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.035110 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.035121 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.035138 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.035149 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:50Z","lastTransitionTime":"2026-01-29T16:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.138706 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.138805 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.138819 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.138847 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.138893 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:50Z","lastTransitionTime":"2026-01-29T16:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.241943 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.242028 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.242041 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.242069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.242082 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:50Z","lastTransitionTime":"2026-01-29T16:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.345013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.345085 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.345105 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.345135 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.345154 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:50Z","lastTransitionTime":"2026-01-29T16:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.448539 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.448605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.448618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.448645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.448658 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:50Z","lastTransitionTime":"2026-01-29T16:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.551987 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.552061 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.552076 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.552102 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.552119 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:50Z","lastTransitionTime":"2026-01-29T16:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.655370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.655964 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.656104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.656299 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.656452 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:50Z","lastTransitionTime":"2026-01-29T16:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.759506 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.759626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.759640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.759662 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.759675 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:50Z","lastTransitionTime":"2026-01-29T16:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.864366 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.864443 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.864463 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.864492 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.864513 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:50Z","lastTransitionTime":"2026-01-29T16:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.967380 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.967445 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.967462 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.967486 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:50 crc kubenswrapper[4895]: I0129 16:12:50.967503 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:50Z","lastTransitionTime":"2026-01-29T16:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.001629 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:38:45.218216713 +0000 UTC Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.036205 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.036339 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:51 crc kubenswrapper[4895]: E0129 16:12:51.036382 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.036338 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:51 crc kubenswrapper[4895]: E0129 16:12:51.036546 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:51 crc kubenswrapper[4895]: E0129 16:12:51.036725 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.036784 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:51 crc kubenswrapper[4895]: E0129 16:12:51.036847 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.070553 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.070634 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.070659 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.070690 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.070715 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:51Z","lastTransitionTime":"2026-01-29T16:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.174548 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.174603 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.174618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.174637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.174647 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:51Z","lastTransitionTime":"2026-01-29T16:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.277853 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.278247 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.278271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.278291 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.278305 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:51Z","lastTransitionTime":"2026-01-29T16:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.382423 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.382491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.382511 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.382538 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.382559 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:51Z","lastTransitionTime":"2026-01-29T16:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.484968 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.485044 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.485065 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.485092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.485108 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:51Z","lastTransitionTime":"2026-01-29T16:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.588002 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.588053 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.588071 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.588096 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.588115 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:51Z","lastTransitionTime":"2026-01-29T16:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.692235 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.692306 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.692319 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.692341 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.692360 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:51Z","lastTransitionTime":"2026-01-29T16:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.794595 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.794651 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.794670 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.794697 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.794718 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:51Z","lastTransitionTime":"2026-01-29T16:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.898418 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.899063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.899093 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.899131 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:51 crc kubenswrapper[4895]: I0129 16:12:51.899153 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:51Z","lastTransitionTime":"2026-01-29T16:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.002179 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:07:07.301345535 +0000 UTC Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.002980 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.003042 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.003058 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.003084 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.003128 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:52Z","lastTransitionTime":"2026-01-29T16:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.106207 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.106272 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.106288 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.106312 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.106327 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:52Z","lastTransitionTime":"2026-01-29T16:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.209101 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.209158 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.209172 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.209191 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.209204 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:52Z","lastTransitionTime":"2026-01-29T16:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.312095 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.312161 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.312176 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.312200 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.312215 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:52Z","lastTransitionTime":"2026-01-29T16:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.415964 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.416009 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.416021 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.416038 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.416050 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:52Z","lastTransitionTime":"2026-01-29T16:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.519206 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.519660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.519753 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.519862 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.520002 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:52Z","lastTransitionTime":"2026-01-29T16:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.623150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.623625 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.623772 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.623949 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.624092 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:52Z","lastTransitionTime":"2026-01-29T16:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.727318 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.727400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.727419 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.727447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.727466 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:52Z","lastTransitionTime":"2026-01-29T16:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.831094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.831170 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.831188 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.831215 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.831239 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:52Z","lastTransitionTime":"2026-01-29T16:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.935212 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.935295 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.935323 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.935356 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:52 crc kubenswrapper[4895]: I0129 16:12:52.935379 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:52Z","lastTransitionTime":"2026-01-29T16:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.003022 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:22:53.220308657 +0000 UTC Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.035785 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.035978 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.035973 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:53 crc kubenswrapper[4895]: E0129 16:12:53.036234 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.036441 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:53 crc kubenswrapper[4895]: E0129 16:12:53.036663 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:53 crc kubenswrapper[4895]: E0129 16:12:53.036719 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:53 crc kubenswrapper[4895]: E0129 16:12:53.036905 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.038615 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.038666 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.038683 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.038707 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.038720 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.143239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.143312 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.143332 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.143362 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.143380 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.246797 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.246854 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.246910 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.246944 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.246969 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.351207 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.351288 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.351306 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.351338 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.351358 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.454578 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.454650 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.454671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.454703 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.454723 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.492377 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.492462 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.492475 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.492495 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.492508 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: E0129 16:12:53.511938 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.517258 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.517308 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.517318 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.517336 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.517347 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: E0129 16:12:53.533993 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.540164 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.540210 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.540226 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.540248 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.540261 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: E0129 16:12:53.558295 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.562696 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.562739 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.562754 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.562779 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.562795 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: E0129 16:12:53.579092 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.585521 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.585611 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.585629 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.585667 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.585696 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: E0129 16:12:53.600811 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:53 crc kubenswrapper[4895]: E0129 16:12:53.601176 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.603768 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.603825 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.603839 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.603859 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.603901 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.708276 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.708350 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.708376 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.708411 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.708440 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.812184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.812237 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.812250 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.812271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.812286 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.915792 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.915845 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.915855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.915887 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:53 crc kubenswrapper[4895]: I0129 16:12:53.915899 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:53Z","lastTransitionTime":"2026-01-29T16:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.004086 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:18:08.167422901 +0000 UTC Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.018577 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.018630 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.018645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.018668 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.018684 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:54Z","lastTransitionTime":"2026-01-29T16:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.121772 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.121840 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.121857 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.121913 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.121931 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:54Z","lastTransitionTime":"2026-01-29T16:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.225184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.225248 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.225263 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.225285 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.225302 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:54Z","lastTransitionTime":"2026-01-29T16:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.329766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.329842 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.329887 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.329920 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.329983 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:54Z","lastTransitionTime":"2026-01-29T16:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.433769 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.433905 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.433926 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.433951 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.433966 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:54Z","lastTransitionTime":"2026-01-29T16:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.537104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.537174 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.537193 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.537222 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.537243 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:54Z","lastTransitionTime":"2026-01-29T16:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.641101 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.641485 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.641577 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.641694 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.641779 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:54Z","lastTransitionTime":"2026-01-29T16:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.745153 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.745227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.745251 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.745281 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.745304 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:54Z","lastTransitionTime":"2026-01-29T16:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.848002 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.848063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.848075 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.848095 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.848107 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:54Z","lastTransitionTime":"2026-01-29T16:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.951436 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.951483 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.951499 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.951529 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:54 crc kubenswrapper[4895]: I0129 16:12:54.951545 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:54Z","lastTransitionTime":"2026-01-29T16:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.005166 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:33:18.864753157 +0000 UTC Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.037212 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.037397 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.037423 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:55 crc kubenswrapper[4895]: E0129 16:12:55.038024 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.037574 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:55 crc kubenswrapper[4895]: E0129 16:12:55.038132 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:55 crc kubenswrapper[4895]: E0129 16:12:55.037756 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:55 crc kubenswrapper[4895]: E0129 16:12:55.038320 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.053855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.053927 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.053938 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.053955 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.053969 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:55Z","lastTransitionTime":"2026-01-29T16:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.157190 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.157260 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.157274 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.157307 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.157327 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:55Z","lastTransitionTime":"2026-01-29T16:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.260734 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.260796 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.260814 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.260841 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.260858 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:55Z","lastTransitionTime":"2026-01-29T16:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.363996 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.364493 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.364701 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.364941 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.365127 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:55Z","lastTransitionTime":"2026-01-29T16:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.468577 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.468626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.468634 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.468651 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.468666 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:55Z","lastTransitionTime":"2026-01-29T16:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.572435 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.572516 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.572536 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.572575 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.572598 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:55Z","lastTransitionTime":"2026-01-29T16:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.676576 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.676627 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.676640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.676660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.676698 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:55Z","lastTransitionTime":"2026-01-29T16:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.779846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.779947 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.779963 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.779989 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.780002 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:55Z","lastTransitionTime":"2026-01-29T16:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.884314 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.884391 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.884409 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.884438 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.884457 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:55Z","lastTransitionTime":"2026-01-29T16:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.988705 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.988792 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.988811 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.988845 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:55 crc kubenswrapper[4895]: I0129 16:12:55.988905 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:55Z","lastTransitionTime":"2026-01-29T16:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.006195 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:44:35.243428476 +0000 UTC Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.091684 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.091735 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.091751 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.091771 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.091785 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:56Z","lastTransitionTime":"2026-01-29T16:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.194985 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.195069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.195088 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.195116 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.195134 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:56Z","lastTransitionTime":"2026-01-29T16:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.298991 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.299058 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.299088 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.299113 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.299126 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:56Z","lastTransitionTime":"2026-01-29T16:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.402733 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.402819 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.402833 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.402855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.402907 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:56Z","lastTransitionTime":"2026-01-29T16:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.505890 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.505957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.505980 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.506009 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.506025 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:56Z","lastTransitionTime":"2026-01-29T16:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.608566 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.608637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.608651 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.608674 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.608685 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:56Z","lastTransitionTime":"2026-01-29T16:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.712962 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.713015 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.713029 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.713050 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.713064 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:56Z","lastTransitionTime":"2026-01-29T16:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.815790 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.815832 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.815844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.815883 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.815895 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:56Z","lastTransitionTime":"2026-01-29T16:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.919269 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.919322 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.919335 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.919354 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:56 crc kubenswrapper[4895]: I0129 16:12:56.919368 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:56Z","lastTransitionTime":"2026-01-29T16:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.006501 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:03:56.080025458 +0000 UTC Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.022797 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.022848 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.022888 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.022913 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.022925 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:57Z","lastTransitionTime":"2026-01-29T16:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.036175 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.036270 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.036314 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:12:57 crc kubenswrapper[4895]: E0129 16:12:57.036465 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.036601 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:12:57 crc kubenswrapper[4895]: E0129 16:12:57.036599 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:12:57 crc kubenswrapper[4895]: E0129 16:12:57.036785 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:12:57 crc kubenswrapper[4895]: E0129 16:12:57.036976 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.063629 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a9
5b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"ter
minated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c12
1040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.100246 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.123268 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.125826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.125856 4895 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.125879 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.125895 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.125905 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:57Z","lastTransitionTime":"2026-01-29T16:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.143899 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54915014-84f4-4c65-ad36-11049c97a3c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68014e55c516cafe98fd65b5e162f27e8abb4af9675a2aaf6fecb16377fb35e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95fd8ff11220a00309543179b8c46c74ab7fd2f75a54ce16541ed217121747c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b587eb8a168e54cd2c1bef70157580a7d20cd8e378311a33996f3867de251ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.163491 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.178818 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.200158 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.213894 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2
cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.229801 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.229851 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.229938 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.229963 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.229977 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:57Z","lastTransitionTime":"2026-01-29T16:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.232688 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc 
kubenswrapper[4895]: I0129 16:12:57.248246 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.266093 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.283562 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.300711 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.323291 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:41Z\\\",\\\"message\\\":\\\"manager/package-server-manager-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.110\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:12:41.627066 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 16:12:41.627086 6535 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:12:41.627085 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-lqtb8\\\\nF0129 16:12:41.627094 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e2
08e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.333076 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.333137 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.333151 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.333171 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.333184 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:57Z","lastTransitionTime":"2026-01-29T16:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.340368 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:
12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.355583 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.370824 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.386681 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:12:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.436137 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.436217 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.436276 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:57 crc 
kubenswrapper[4895]: I0129 16:12:57.436301 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.436314 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:57Z","lastTransitionTime":"2026-01-29T16:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.540282 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.540364 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.540387 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.540416 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.540435 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:57Z","lastTransitionTime":"2026-01-29T16:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.644409 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.644951 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.645062 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.645172 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.645260 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:57Z","lastTransitionTime":"2026-01-29T16:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.748465 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.749137 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.749182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.749208 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.749223 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:57Z","lastTransitionTime":"2026-01-29T16:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.851685 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.851724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.851737 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.851753 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.851765 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:57Z","lastTransitionTime":"2026-01-29T16:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.955011 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.955114 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.955149 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.955184 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:57 crc kubenswrapper[4895]: I0129 16:12:57.955204 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:57Z","lastTransitionTime":"2026-01-29T16:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.008052 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 06:53:27.370257299 +0000 UTC
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.039563 4895 scope.go:117] "RemoveContainer" containerID="ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34"
Jan 29 16:12:58 crc kubenswrapper[4895]: E0129 16:12:58.040095 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.058267 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.058359 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.058386 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.058420 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.058445 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:58Z","lastTransitionTime":"2026-01-29T16:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.162207 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.162308 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.162322 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.162347 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.162367 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:58Z","lastTransitionTime":"2026-01-29T16:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.265233 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.265348 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.265359 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.265375 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.265385 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:58Z","lastTransitionTime":"2026-01-29T16:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.367994 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.368049 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.368067 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.368086 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.368096 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:58Z","lastTransitionTime":"2026-01-29T16:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.470971 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.471022 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.471035 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.471203 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.471224 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:58Z","lastTransitionTime":"2026-01-29T16:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.579188 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.579605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.579626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.579665 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.579685 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:58Z","lastTransitionTime":"2026-01-29T16:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.683104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.683164 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.683177 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.683202 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.683216 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:58Z","lastTransitionTime":"2026-01-29T16:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.786738 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.787235 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.787400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.787563 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.787719 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:58Z","lastTransitionTime":"2026-01-29T16:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.891768 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.891806 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.891819 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.891837 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.891849 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:58Z","lastTransitionTime":"2026-01-29T16:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.994511 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.994556 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.994566 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.994586 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:58 crc kubenswrapper[4895]: I0129 16:12:58.994597 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:58Z","lastTransitionTime":"2026-01-29T16:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.008972 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:59:27.804935034 +0000 UTC
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.036327 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.036386 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.036485 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:12:59 crc kubenswrapper[4895]: E0129 16:12:59.036580 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.036673 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw"
Jan 29 16:12:59 crc kubenswrapper[4895]: E0129 16:12:59.036862 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:12:59 crc kubenswrapper[4895]: E0129 16:12:59.037050 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:12:59 crc kubenswrapper[4895]: E0129 16:12:59.037215 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.097611 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.097690 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.097709 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.097738 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.097757 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:59Z","lastTransitionTime":"2026-01-29T16:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.203062 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.203192 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.203254 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.203290 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.203346 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:59Z","lastTransitionTime":"2026-01-29T16:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.306711 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.306753 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.306770 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.306788 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.306800 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:59Z","lastTransitionTime":"2026-01-29T16:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.409316 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.409372 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.409390 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.409415 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.409430 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:59Z","lastTransitionTime":"2026-01-29T16:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.512514 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.512593 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.512620 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.512661 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.512681 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:59Z","lastTransitionTime":"2026-01-29T16:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.616385 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.616449 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.616466 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.616491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.616509 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:59Z","lastTransitionTime":"2026-01-29T16:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.719797 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.719932 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.719956 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.719981 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.720000 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:59Z","lastTransitionTime":"2026-01-29T16:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.823383 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.823664 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.823724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.823793 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.823924 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:59Z","lastTransitionTime":"2026-01-29T16:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.926983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.927058 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.927082 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.927115 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:12:59 crc kubenswrapper[4895]: I0129 16:12:59.927134 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:12:59Z","lastTransitionTime":"2026-01-29T16:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.010051 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 09:58:55.453851074 +0000 UTC
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.030742 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.031215 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.031358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.031501 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.031625 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:00Z","lastTransitionTime":"2026-01-29T16:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.136218 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.136609 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.136695 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.136805 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.136912 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:00Z","lastTransitionTime":"2026-01-29T16:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.240759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.241193 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.241280 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.241366 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.241435 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:00Z","lastTransitionTime":"2026-01-29T16:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.344506 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.344839 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.344927 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.345012 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.345108 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:00Z","lastTransitionTime":"2026-01-29T16:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.448000 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.448063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.448072 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.448102 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.448117 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:00Z","lastTransitionTime":"2026-01-29T16:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.550966 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.551083 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.551104 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.551135 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.551156 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:00Z","lastTransitionTime":"2026-01-29T16:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.654317 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.654369 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.654381 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.654401 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.654416 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:00Z","lastTransitionTime":"2026-01-29T16:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.757927 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.758414 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.758590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.758748 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.758917 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:00Z","lastTransitionTime":"2026-01-29T16:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.865092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.865180 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.865214 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.865248 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.865275 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:00Z","lastTransitionTime":"2026-01-29T16:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.967979 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.968019 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.968028 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.968044 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:00 crc kubenswrapper[4895]: I0129 16:13:00.968054 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:00Z","lastTransitionTime":"2026-01-29T16:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.010910 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:25:59.609020889 +0000 UTC Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.036998 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.037074 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.037088 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.037112 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:01 crc kubenswrapper[4895]: E0129 16:13:01.037279 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:01 crc kubenswrapper[4895]: E0129 16:13:01.037499 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:01 crc kubenswrapper[4895]: E0129 16:13:01.037646 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:01 crc kubenswrapper[4895]: E0129 16:13:01.037930 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.071380 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.071430 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.071449 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.071477 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.071503 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:01Z","lastTransitionTime":"2026-01-29T16:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.174607 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.174680 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.174694 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.174717 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.174730 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:01Z","lastTransitionTime":"2026-01-29T16:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.278245 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.278297 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.278307 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.278325 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.278339 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:01Z","lastTransitionTime":"2026-01-29T16:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.381531 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.381589 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.381602 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.381624 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.381641 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:01Z","lastTransitionTime":"2026-01-29T16:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.484488 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.484542 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.484555 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.484633 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.484648 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:01Z","lastTransitionTime":"2026-01-29T16:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.587598 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.587669 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.587686 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.587709 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.587727 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:01Z","lastTransitionTime":"2026-01-29T16:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.690754 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.690831 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.690846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.690899 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.690914 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:01Z","lastTransitionTime":"2026-01-29T16:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.793623 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.793681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.793700 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.793725 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.793742 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:01Z","lastTransitionTime":"2026-01-29T16:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.896578 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.896617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.896626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.896652 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.896662 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:01Z","lastTransitionTime":"2026-01-29T16:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.998952 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.998986 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.998997 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.999011 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:01 crc kubenswrapper[4895]: I0129 16:13:01.999021 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:01Z","lastTransitionTime":"2026-01-29T16:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.011739 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:29:40.015847764 +0000 UTC Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.101397 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.101442 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.101455 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.101474 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.101488 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:02Z","lastTransitionTime":"2026-01-29T16:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.204712 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.204798 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.204824 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.204855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.204920 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:02Z","lastTransitionTime":"2026-01-29T16:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.308728 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.308781 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.308793 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.308812 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.308826 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:02Z","lastTransitionTime":"2026-01-29T16:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.411556 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.411638 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.411659 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.411688 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.411707 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:02Z","lastTransitionTime":"2026-01-29T16:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.514841 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.514904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.514916 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.514935 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.514947 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:02Z","lastTransitionTime":"2026-01-29T16:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.618540 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.618621 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.618637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.618668 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.618684 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:02Z","lastTransitionTime":"2026-01-29T16:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.721291 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.721364 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.721377 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.721403 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.721417 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:02Z","lastTransitionTime":"2026-01-29T16:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.824562 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.824622 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.824635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.824659 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.824676 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:02Z","lastTransitionTime":"2026-01-29T16:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.927575 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.927640 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.927650 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.927671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:02 crc kubenswrapper[4895]: I0129 16:13:02.927686 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:02Z","lastTransitionTime":"2026-01-29T16:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.012381 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:56:05.228415509 +0000 UTC Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.030724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.030776 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.030790 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.030810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.030825 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:03Z","lastTransitionTime":"2026-01-29T16:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.036313 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.036382 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.036308 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.036488 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:03 crc kubenswrapper[4895]: E0129 16:13:03.036620 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:03 crc kubenswrapper[4895]: E0129 16:13:03.036838 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:03 crc kubenswrapper[4895]: E0129 16:13:03.037022 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:03 crc kubenswrapper[4895]: E0129 16:13:03.037190 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.134037 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.134080 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.134092 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.134110 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.134122 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:03Z","lastTransitionTime":"2026-01-29T16:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.233052 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:03 crc kubenswrapper[4895]: E0129 16:13:03.233307 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:13:03 crc kubenswrapper[4895]: E0129 16:13:03.233444 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs podName:5113e2b8-dc97-42a1-aa1c-3d604cada8c2 nodeName:}" failed. No retries permitted until 2026-01-29 16:13:35.233415382 +0000 UTC m=+99.036392646 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs") pod "network-metrics-daemon-h9mkw" (UID: "5113e2b8-dc97-42a1-aa1c-3d604cada8c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.237345 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.237392 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.237403 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.237422 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.237434 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:03Z","lastTransitionTime":"2026-01-29T16:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.340182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.340245 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.340257 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.340276 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.340334 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:03Z","lastTransitionTime":"2026-01-29T16:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.443241 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.443648 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.443657 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.443674 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.443685 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:03Z","lastTransitionTime":"2026-01-29T16:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.546525 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.546720 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.546746 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.546807 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.546827 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:03Z","lastTransitionTime":"2026-01-29T16:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.643753 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.643815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.643826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.643845 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.643857 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:03Z","lastTransitionTime":"2026-01-29T16:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:03 crc kubenswrapper[4895]: E0129 16:13:03.656783 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.662226 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.662346 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.662472 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.662567 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.662727 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:03Z","lastTransitionTime":"2026-01-29T16:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:03 crc kubenswrapper[4895]: E0129 16:13:03.676820 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.680949 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.681083 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.681172 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.681248 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.681307 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:03Z","lastTransitionTime":"2026-01-29T16:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:03 crc kubenswrapper[4895]: E0129 16:13:03.733154 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:03 crc kubenswrapper[4895]: E0129 16:13:03.733350 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.735665 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.735738 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.735754 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.735772 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.735807 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:03Z","lastTransitionTime":"2026-01-29T16:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.838971 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.839017 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.839030 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.839047 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.839058 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:03Z","lastTransitionTime":"2026-01-29T16:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.941212 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.941252 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.941261 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.941276 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:03 crc kubenswrapper[4895]: I0129 16:13:03.941323 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:03Z","lastTransitionTime":"2026-01-29T16:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.021062 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:05:43.639905721 +0000 UTC Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.045165 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.045209 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.045220 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.045240 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.045252 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:04Z","lastTransitionTime":"2026-01-29T16:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.148287 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.148330 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.148341 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.148361 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.148372 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:04Z","lastTransitionTime":"2026-01-29T16:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.252278 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.252334 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.252351 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.252370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.252383 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:04Z","lastTransitionTime":"2026-01-29T16:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.354926 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.354960 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.354973 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.354989 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.354998 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:04Z","lastTransitionTime":"2026-01-29T16:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.458671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.458715 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.458726 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.458745 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.458758 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:04Z","lastTransitionTime":"2026-01-29T16:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.527162 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7p5vp_dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5/kube-multus/0.log" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.527228 4895 generic.go:334] "Generic (PLEG): container finished" podID="dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5" containerID="e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244" exitCode=1 Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.527265 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7p5vp" event={"ID":"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5","Type":"ContainerDied","Data":"e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244"} Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.528178 4895 scope.go:117] "RemoveContainer" containerID="e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.548374 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.562157 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.562200 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.562212 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.562230 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.562242 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:04Z","lastTransitionTime":"2026-01-29T16:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.576668 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:41Z\\\",\\\"message\\\":\\\"manager/package-server-manager-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.110\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:12:41.627066 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 16:12:41.627086 6535 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:12:41.627085 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-lqtb8\\\\nF0129 16:12:41.627094 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e2
08e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.595764 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:04Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:13:04Z\\\",\\\"message\\\":\\\"2026-01-29T16:12:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398\\\\n2026-01-29T16:12:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398 to /host/opt/cni/bin/\\\\n2026-01-29T16:12:19Z [verbose] multus-daemon started\\\\n2026-01-29T16:12:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:13:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.608714 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.621785 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.635639 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.649014 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.664909 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.664942 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.664952 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.664967 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.664977 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:04Z","lastTransitionTime":"2026-01-29T16:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.668689 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.683765 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.706671 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.720809 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54915014-84f4-4c65-ad36-11049c97a3c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68014e55c516cafe98fd65b5e162f27e8abb4af9675a2aaf6fecb16377fb35e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95fd8ff11220a00309543179b8c46c74ab7fd2f75a54ce16541ed217121747c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b587eb8a168e54cd2c1bef70157580a7d20cd8e378311a33996f3867de251ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd64
2de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.736011 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.752792 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.767290 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.768044 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.768069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.768079 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.768094 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.768103 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:04Z","lastTransitionTime":"2026-01-29T16:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.780896 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.807352 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.827333 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.841998 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:04 crc 
kubenswrapper[4895]: I0129 16:13:04.871419 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.871448 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.871459 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.871474 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.871484 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:04Z","lastTransitionTime":"2026-01-29T16:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.975293 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.975358 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.975370 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.975383 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:04 crc kubenswrapper[4895]: I0129 16:13:04.975426 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:04Z","lastTransitionTime":"2026-01-29T16:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.022042 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 10:52:23.619676166 +0000 UTC Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.036679 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.036780 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:05 crc kubenswrapper[4895]: E0129 16:13:05.036851 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:05 crc kubenswrapper[4895]: E0129 16:13:05.036979 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.037040 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:05 crc kubenswrapper[4895]: E0129 16:13:05.037107 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.037132 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:05 crc kubenswrapper[4895]: E0129 16:13:05.037191 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.078595 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.078637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.078649 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.078665 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.078678 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:05Z","lastTransitionTime":"2026-01-29T16:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.188636 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.188680 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.188689 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.188707 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.188719 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:05Z","lastTransitionTime":"2026-01-29T16:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.291800 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.291862 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.291887 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.291905 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.291918 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:05Z","lastTransitionTime":"2026-01-29T16:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.395647 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.395700 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.395713 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.395740 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.395757 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:05Z","lastTransitionTime":"2026-01-29T16:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.499846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.499913 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.499926 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.499945 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.499957 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:05Z","lastTransitionTime":"2026-01-29T16:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.535016 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7p5vp_dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5/kube-multus/0.log" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.535106 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7p5vp" event={"ID":"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5","Type":"ContainerStarted","Data":"35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397"} Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.558849 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:41Z\\\",\\\"message\\\":\\\"manager/package-server-manager-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.110\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:12:41.627066 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 16:12:41.627086 6535 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:12:41.627085 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-lqtb8\\\\nF0129 16:12:41.627094 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e2
08e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.575898 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:13:04Z\\\",\\\"message\\\":\\\"2026-01-29T16:12:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398\\\\n2026-01-29T16:12:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398 to /host/opt/cni/bin/\\\\n2026-01-29T16:12:19Z [verbose] multus-daemon started\\\\n2026-01-29T16:12:19Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T16:13:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:13:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.589023 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3
e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.602971 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.603749 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.603799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.603813 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.603835 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.603848 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:05Z","lastTransitionTime":"2026-01-29T16:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.621142 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.639332 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.655564 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.668170 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.682164 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.699724 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.706883 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.706918 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.706928 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.706945 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.706956 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:05Z","lastTransitionTime":"2026-01-29T16:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.715646 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.729338 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.743443 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.759617 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2
cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.780116 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.795270 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56b
f2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.806432 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54915014-84f4-4c65-ad36-11049c97a3c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68014e55c516cafe98fd65b5e162f27e8abb4af9675a2aaf6fecb16377fb35e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95fd8ff11220a00309543179b8c46c74ab7fd2f75a54ce16541ed217121747c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b587eb8a168e54cd2c1bef70157580a7d20cd8e378311a33996f3867de251ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.810147 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.810200 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.810211 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.810227 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.810237 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:05Z","lastTransitionTime":"2026-01-29T16:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.818523 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:05 crc 
kubenswrapper[4895]: I0129 16:13:05.914169 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.914207 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.914216 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.914232 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:05 crc kubenswrapper[4895]: I0129 16:13:05.914243 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:05Z","lastTransitionTime":"2026-01-29T16:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.017300 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.017398 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.017414 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.017435 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.017447 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:06Z","lastTransitionTime":"2026-01-29T16:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.022639 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 01:17:22.297909983 +0000 UTC Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.120101 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.120158 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.120170 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.120190 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.120203 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:06Z","lastTransitionTime":"2026-01-29T16:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.223211 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.223272 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.223282 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.223300 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.223312 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:06Z","lastTransitionTime":"2026-01-29T16:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.325787 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.325832 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.325845 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.325882 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.325896 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:06Z","lastTransitionTime":"2026-01-29T16:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.428961 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.429004 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.429015 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.429030 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.429044 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:06Z","lastTransitionTime":"2026-01-29T16:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.531671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.531722 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.531735 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.531754 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.531768 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:06Z","lastTransitionTime":"2026-01-29T16:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.634538 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.634605 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.634618 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.634637 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.634653 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:06Z","lastTransitionTime":"2026-01-29T16:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.737173 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.737246 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.737257 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.737296 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.737320 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:06Z","lastTransitionTime":"2026-01-29T16:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.840539 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.840610 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.840623 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.840644 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.840656 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:06Z","lastTransitionTime":"2026-01-29T16:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.943720 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.943759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.943768 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.943783 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:06 crc kubenswrapper[4895]: I0129 16:13:06.943793 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:06Z","lastTransitionTime":"2026-01-29T16:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.023736 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:30:30.008582345 +0000 UTC Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.036277 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.036464 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.036508 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.036684 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:07 crc kubenswrapper[4895]: E0129 16:13:07.036666 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:07 crc kubenswrapper[4895]: E0129 16:13:07.036767 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:07 crc kubenswrapper[4895]: E0129 16:13:07.036903 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:07 crc kubenswrapper[4895]: E0129 16:13:07.037060 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.045951 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.045974 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.045983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.045997 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.046007 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:07Z","lastTransitionTime":"2026-01-29T16:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.053050 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.070962 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.086313 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56b
f2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.097363 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54915014-84f4-4c65-ad36-11049c97a3c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68014e55c516cafe98fd65b5e162f27e8abb4af9675a2aaf6fecb16377fb35e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95fd8ff11220a00309543179b8c46c74ab7fd2f75a54ce16541ed217121747c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b587eb8a168e54cd2c1bef70157580a7d20cd8e378311a33996f3867de251ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.108735 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.119567 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.129975 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.139544 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2
cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.147429 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.147451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.147467 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.147482 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.147493 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:07Z","lastTransitionTime":"2026-01-29T16:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.152299 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc 
kubenswrapper[4895]: I0129 16:13:07.164609 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.178956 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.192619 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.216258 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:41Z\\\",\\\"message\\\":\\\"manager/package-server-manager-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.110\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:12:41.627066 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 16:12:41.627086 6535 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:12:41.627085 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-lqtb8\\\\nF0129 16:12:41.627094 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e2
08e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.229133 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:13:04Z\\\",\\\"message\\\":\\\"2026-01-29T16:12:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398\\\\n2026-01-29T16:12:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398 to /host/opt/cni/bin/\\\\n2026-01-29T16:12:19Z [verbose] multus-daemon started\\\\n2026-01-29T16:12:19Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T16:13:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:13:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.240830 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3
e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.249789 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.249842 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.249858 4895 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.249899 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.249912 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:07Z","lastTransitionTime":"2026-01-29T16:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.254516 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07
372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.267738 4895 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d4632e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.282013 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.354848 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.354914 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.354923 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.354937 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.354947 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:07Z","lastTransitionTime":"2026-01-29T16:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.457697 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.457761 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.457773 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.457793 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.457805 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:07Z","lastTransitionTime":"2026-01-29T16:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.560911 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.561269 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.561362 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.561447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.561528 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:07Z","lastTransitionTime":"2026-01-29T16:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.664927 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.664967 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.664980 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.664997 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.665007 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:07Z","lastTransitionTime":"2026-01-29T16:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.767720 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.767778 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.767793 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.767850 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.767880 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:07Z","lastTransitionTime":"2026-01-29T16:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.870435 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.870516 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.870535 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.870556 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.870573 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:07Z","lastTransitionTime":"2026-01-29T16:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.972799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.972850 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.972862 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.972911 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:07 crc kubenswrapper[4895]: I0129 16:13:07.972928 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:07Z","lastTransitionTime":"2026-01-29T16:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.023898 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:12:16.901559024 +0000 UTC Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.075767 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.075818 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.075831 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.075853 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.075885 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:08Z","lastTransitionTime":"2026-01-29T16:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.179160 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.179211 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.179220 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.179237 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.179250 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:08Z","lastTransitionTime":"2026-01-29T16:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.282826 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.282924 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.282940 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.282965 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.282981 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:08Z","lastTransitionTime":"2026-01-29T16:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.386239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.386282 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.386294 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.386313 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.386324 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:08Z","lastTransitionTime":"2026-01-29T16:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.488243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.488288 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.488297 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.488313 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.488325 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:08Z","lastTransitionTime":"2026-01-29T16:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.590541 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.590590 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.590602 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.590625 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.590636 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:08Z","lastTransitionTime":"2026-01-29T16:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.693414 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.693451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.693463 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.693479 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.693492 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:08Z","lastTransitionTime":"2026-01-29T16:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.795802 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.795894 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.795910 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.795930 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.795942 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:08Z","lastTransitionTime":"2026-01-29T16:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.898334 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.898371 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.898383 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.898400 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:08 crc kubenswrapper[4895]: I0129 16:13:08.898411 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:08Z","lastTransitionTime":"2026-01-29T16:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.000971 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.001047 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.001058 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.001075 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.001087 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:09Z","lastTransitionTime":"2026-01-29T16:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.025012 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:53:16.615426547 +0000 UTC Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.037132 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.037133 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.037152 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:09 crc kubenswrapper[4895]: E0129 16:13:09.037834 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.038061 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:09 crc kubenswrapper[4895]: E0129 16:13:09.038125 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:09 crc kubenswrapper[4895]: E0129 16:13:09.038257 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:09 crc kubenswrapper[4895]: E0129 16:13:09.038371 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.104336 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.104380 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.104390 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.104409 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.104420 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:09Z","lastTransitionTime":"2026-01-29T16:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.207441 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.207490 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.207503 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.207524 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.207539 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:09Z","lastTransitionTime":"2026-01-29T16:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.310933 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.310969 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.310978 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.310993 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.311003 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:09Z","lastTransitionTime":"2026-01-29T16:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.413663 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.413712 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.413724 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.413741 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.413751 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:09Z","lastTransitionTime":"2026-01-29T16:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.516364 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.516421 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.516439 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.516461 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.516473 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:09Z","lastTransitionTime":"2026-01-29T16:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.619478 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.619553 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.619571 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.619591 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.619604 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:09Z","lastTransitionTime":"2026-01-29T16:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.722547 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.722611 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.722631 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.722656 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.722673 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:09Z","lastTransitionTime":"2026-01-29T16:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.825957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.826010 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.826022 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.826045 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.826058 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:09Z","lastTransitionTime":"2026-01-29T16:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.928442 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.928494 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.928506 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.928527 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:09 crc kubenswrapper[4895]: I0129 16:13:09.928541 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:09Z","lastTransitionTime":"2026-01-29T16:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.025807 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:37:05.127168052 +0000 UTC Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.031132 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.031196 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.031211 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.031232 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.031245 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:10Z","lastTransitionTime":"2026-01-29T16:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.134150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.134239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.134259 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.134287 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.134306 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:10Z","lastTransitionTime":"2026-01-29T16:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.236654 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.236707 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.236725 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.236747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.236764 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:10Z","lastTransitionTime":"2026-01-29T16:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.339613 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.339661 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.339673 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.339695 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.339714 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:10Z","lastTransitionTime":"2026-01-29T16:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.444053 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.444102 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.444116 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.444160 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.444176 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:10Z","lastTransitionTime":"2026-01-29T16:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.547989 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.548075 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.548098 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.548125 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.548143 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:10Z","lastTransitionTime":"2026-01-29T16:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.650777 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.650830 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.650842 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.650877 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.650891 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:10Z","lastTransitionTime":"2026-01-29T16:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.753527 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.753579 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.753595 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.753617 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.753634 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:10Z","lastTransitionTime":"2026-01-29T16:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.856555 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.856625 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.856645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.856672 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.856693 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:10Z","lastTransitionTime":"2026-01-29T16:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.959600 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.959655 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.959667 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.959686 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:10 crc kubenswrapper[4895]: I0129 16:13:10.959699 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:10Z","lastTransitionTime":"2026-01-29T16:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.026438 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:10:57.101699624 +0000 UTC Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.036895 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.036948 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.036973 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.036948 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:11 crc kubenswrapper[4895]: E0129 16:13:11.037118 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:11 crc kubenswrapper[4895]: E0129 16:13:11.037222 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:11 crc kubenswrapper[4895]: E0129 16:13:11.037299 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:11 crc kubenswrapper[4895]: E0129 16:13:11.038333 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.062413 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.062495 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.062514 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.062545 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.062565 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:11Z","lastTransitionTime":"2026-01-29T16:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.165945 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.166028 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.166045 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.166070 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.166089 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:11Z","lastTransitionTime":"2026-01-29T16:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.269303 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.269354 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.269367 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.269388 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.269401 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:11Z","lastTransitionTime":"2026-01-29T16:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.371994 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.372035 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.372045 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.372063 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.372076 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:11Z","lastTransitionTime":"2026-01-29T16:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.475252 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.475319 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.475338 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.475364 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.475384 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:11Z","lastTransitionTime":"2026-01-29T16:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.579004 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.579064 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.579081 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.579107 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.579124 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:11Z","lastTransitionTime":"2026-01-29T16:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.682560 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.682601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.682614 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.682635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.682647 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:11Z","lastTransitionTime":"2026-01-29T16:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.785185 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.785234 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.785248 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.785268 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.785281 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:11Z","lastTransitionTime":"2026-01-29T16:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.888947 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.889016 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.889037 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.889060 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.889078 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:11Z","lastTransitionTime":"2026-01-29T16:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.993054 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.994012 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.994170 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.994371 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:11 crc kubenswrapper[4895]: I0129 16:13:11.994517 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:11Z","lastTransitionTime":"2026-01-29T16:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.027413 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 17:04:18.515121501 +0000 UTC Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.097961 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.098034 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.098054 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.098085 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.098106 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:12Z","lastTransitionTime":"2026-01-29T16:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.201166 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.201566 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.201661 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.201750 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.201848 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:12Z","lastTransitionTime":"2026-01-29T16:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.304268 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.304346 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.304364 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.304387 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.304406 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:12Z","lastTransitionTime":"2026-01-29T16:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.407853 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.407918 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.407931 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.407947 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.407960 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:12Z","lastTransitionTime":"2026-01-29T16:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.511103 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.511203 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.511222 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.511248 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.511265 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:12Z","lastTransitionTime":"2026-01-29T16:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.614742 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.614821 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.614840 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.614895 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.614915 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:12Z","lastTransitionTime":"2026-01-29T16:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.718051 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.718134 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.718153 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.718185 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.718211 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:12Z","lastTransitionTime":"2026-01-29T16:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.821332 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.821401 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.821418 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.821438 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.821457 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:12Z","lastTransitionTime":"2026-01-29T16:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.924834 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.924983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.925005 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.925037 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:12 crc kubenswrapper[4895]: I0129 16:13:12.925057 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:12Z","lastTransitionTime":"2026-01-29T16:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.028409 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:38:42.903628944 +0000 UTC Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.028536 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.028591 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.028609 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.028635 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.028658 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.035856 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.035966 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.036070 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:13 crc kubenswrapper[4895]: E0129 16:13:13.036247 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.036500 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:13 crc kubenswrapper[4895]: E0129 16:13:13.037347 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:13 crc kubenswrapper[4895]: E0129 16:13:13.037475 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:13 crc kubenswrapper[4895]: E0129 16:13:13.037590 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.038273 4895 scope.go:117] "RemoveContainer" containerID="ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.132609 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.132676 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.132698 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.132726 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.132745 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.244090 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.244178 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.244208 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.244279 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.244319 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.347243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.347286 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.347295 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.347310 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.347321 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.450619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.450651 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.450660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.450673 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.450683 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.553755 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.553808 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.553822 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.553849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.553886 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.566713 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/2.log" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.569741 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerStarted","Data":"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833"} Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.570214 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.585224 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.601089 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.621499 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.644855 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d
5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.656515 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.656587 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.656598 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.656613 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.656624 4895 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.661966 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0
d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.676691 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54915014-84f4-4c65-ad36-11049c97a3c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68014e55c516cafe98fd65b5e162f27e8abb4af9675a2aaf6fecb16377fb35e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95fd8ff11220a00309543179b8c46c74ab7fd2f75a54ce16541ed217121747c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b587eb8a168e54cd2c1bef70157580a7d20cd8e378311a33996f3867de251ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.693302 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.713570 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.728782 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.742783 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2
cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.755925 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc 
kubenswrapper[4895]: I0129 16:13:13.759895 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.759943 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.759957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.759977 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.759990 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.772541 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1
a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.790473 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.804798 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.819928 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.839271 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:41Z\\\",\\\"message\\\":\\\"manager/package-server-manager-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.110\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:12:41.627066 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 16:12:41.627086 6535 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:12:41.627085 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-lqtb8\\\\nF0129 16:12:41.627094 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set 
n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:13:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.855303 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:13:04Z\\\",\\\"message\\\":\\\"2026-01-29T16:12:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398\\\\n2026-01-29T16:12:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398 to /host/opt/cni/bin/\\\\n2026-01-29T16:12:19Z [verbose] multus-daemon started\\\\n2026-01-29T16:12:19Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T16:13:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:13:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.862087 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.862145 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.862158 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.862183 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.862198 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.867187 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.913503 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.914308 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.914324 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.914346 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.914363 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: E0129 16:13:13.932771 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.937399 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.937444 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.937453 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.937471 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.937483 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: E0129 16:13:13.952925 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.956394 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.956450 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.956461 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.956527 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.956550 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: E0129 16:13:13.975239 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.979945 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.979985 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.979997 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.980015 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.980031 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:13 crc kubenswrapper[4895]: E0129 16:13:13.992543 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:13Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.996545 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.996592 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.996602 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.996619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:13 crc kubenswrapper[4895]: I0129 16:13:13.996631 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:13Z","lastTransitionTime":"2026-01-29T16:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:14 crc kubenswrapper[4895]: E0129 16:13:14.007446 4895 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d92b6098-6fec-422c-9ef8-93b6ed81f7f4\\\",\\\"systemUUID\\\":\\\"fe28fa87-b659-4e7e-881f-540611df3a38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: E0129 16:13:14.007614 4895 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.009332 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.009407 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.009422 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.009444 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.009458 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:14Z","lastTransitionTime":"2026-01-29T16:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.028942 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 06:36:45.636514277 +0000 UTC Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.112852 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.112973 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.112993 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.113021 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.113042 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:14Z","lastTransitionTime":"2026-01-29T16:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.217170 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.217247 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.217267 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.217300 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.217324 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:14Z","lastTransitionTime":"2026-01-29T16:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.320157 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.320212 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.320223 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.320243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.320258 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:14Z","lastTransitionTime":"2026-01-29T16:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.423385 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.423493 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.423517 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.423568 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.423585 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:14Z","lastTransitionTime":"2026-01-29T16:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.526674 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.526728 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.526746 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.526770 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.526787 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:14Z","lastTransitionTime":"2026-01-29T16:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.575395 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/3.log" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.576137 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/2.log" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.579140 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerID="ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833" exitCode=1 Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.579201 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833"} Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.579258 4895 scope.go:117] "RemoveContainer" containerID="ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.581208 4895 scope.go:117] "RemoveContainer" containerID="ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833" Jan 29 16:13:14 crc kubenswrapper[4895]: E0129 16:13:14.581782 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.599690 4895 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01
-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907
fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.612137 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identi
ty-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.624812 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.629772 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.629815 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.629827 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.629846 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.629860 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:14Z","lastTransitionTime":"2026-01-29T16:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.637793 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.649282 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.668196 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/
\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.688933 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56b
f2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.701736 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54915014-84f4-4c65-ad36-11049c97a3c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68014e55c516cafe98fd65b5e162f27e8abb4af9675a2aaf6fecb16377fb35e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95fd8ff11220a00309543179b8c46c74ab7fd2f75a54ce16541ed217121747c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b587eb8a168e54cd2c1bef70157580a7d20cd8e378311a33996f3867de251ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.713954 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.731235 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea1052df3a927f1b15463737d2e3443ac17f27ce1b546501759af29aed0d7f34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:12:41Z\\\",\\\"message\\\":\\\"manager/package-server-manager-metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.110\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:12:41.627066 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 16:12:41.627086 6535 services_controller.go:444] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:12:41.627085 6535 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-lqtb8\\\\nF0129 16:12:41.627094 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"r:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:13:13.893133 6937 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-image-registry/image-registry]} name:Service_openshift-image-registry/image-registry_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.93:5000:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {83c1e277-3d22-42ae-a355-f7a0ff0bd171}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:13:13.893827 6937 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:13:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\
",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 
16:13:14.734298 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.734447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.734468 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.734526 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.734547 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:14Z","lastTransitionTime":"2026-01-29T16:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.745399 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:13:04Z\\\",\\\"message\\\":\\\"2026-01-29T16:12:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398\\\\n2026-01-29T16:12:19+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398 to /host/opt/cni/bin/\\\\n2026-01-29T16:12:19Z [verbose] multus-daemon started\\\\n2026-01-29T16:12:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:13:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:13:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.754763 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.766102 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.781683 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.799978 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.814619 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.828003 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.839974 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.840027 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.840048 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.840074 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.840096 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:14Z","lastTransitionTime":"2026-01-29T16:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.839999 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d4632e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.942396 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:14 crc 
kubenswrapper[4895]: I0129 16:13:14.942440 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.942499 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.942516 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:14 crc kubenswrapper[4895]: I0129 16:13:14.942528 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:14Z","lastTransitionTime":"2026-01-29T16:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.029720 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:19:52.454315543 +0000 UTC Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.037995 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.038085 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.038011 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:15 crc kubenswrapper[4895]: E0129 16:13:15.038220 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.038267 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:15 crc kubenswrapper[4895]: E0129 16:13:15.038393 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:15 crc kubenswrapper[4895]: E0129 16:13:15.038462 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:15 crc kubenswrapper[4895]: E0129 16:13:15.038710 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.046216 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.046269 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.046288 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.046314 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.046334 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:15Z","lastTransitionTime":"2026-01-29T16:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.150052 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.150135 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.150162 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.150191 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.150210 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:15Z","lastTransitionTime":"2026-01-29T16:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.253255 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.253366 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.253385 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.253415 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.253437 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:15Z","lastTransitionTime":"2026-01-29T16:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.356988 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.357073 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.357097 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.357120 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.357137 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:15Z","lastTransitionTime":"2026-01-29T16:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.460841 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.460906 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.460918 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.460937 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.460948 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:15Z","lastTransitionTime":"2026-01-29T16:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.564808 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.564904 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.564922 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.564949 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.564969 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:15Z","lastTransitionTime":"2026-01-29T16:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.587065 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/3.log" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.591434 4895 scope.go:117] "RemoveContainer" containerID="ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833" Jan 29 16:13:15 crc kubenswrapper[4895]: E0129 16:13:15.591715 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.607265 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.624722 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.644034 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.668105 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.668192 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.668213 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.668240 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.668259 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:15Z","lastTransitionTime":"2026-01-29T16:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.678281 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"r:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:13:13.893133 6937 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-image-registry/image-registry]} name:Service_openshift-image-registry/image-registry_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.93:5000:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {83c1e277-3d22-42ae-a355-f7a0ff0bd171}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:13:13.893827 6937 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:13:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e2
08e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.699077 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:13:04Z\\\",\\\"message\\\":\\\"2026-01-29T16:12:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398\\\\n2026-01-29T16:12:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398 to /host/opt/cni/bin/\\\\n2026-01-29T16:12:19Z [verbose] multus-daemon started\\\\n2026-01-29T16:12:19Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T16:13:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:13:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.718636 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3
e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.740793 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.762965 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d46
32e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.772284 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.772331 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.772341 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:15 crc 
kubenswrapper[4895]: I0129 16:13:15.772360 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.772371 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:15Z","lastTransitionTime":"2026-01-29T16:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.781943 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.802044 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fbb4
a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.832449 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://57226ac58bef8836d
5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.850745 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56b
f2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.867158 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54915014-84f4-4c65-ad36-11049c97a3c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68014e55c516cafe98fd65b5e162f27e8abb4af9675a2aaf6fecb16377fb35e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95fd8ff11220a00309543179b8c46c74ab7fd2f75a54ce16541ed217121747c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b587eb8a168e54cd2c1bef70157580a7d20cd8e378311a33996f3867de251ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.875204 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.875261 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.875273 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.875298 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.875311 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:15Z","lastTransitionTime":"2026-01-29T16:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.889599 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.912614 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.935258 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.953596 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2
cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.977410 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:15Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:15 crc 
kubenswrapper[4895]: I0129 16:13:15.980033 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.980093 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.980108 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.980134 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:15 crc kubenswrapper[4895]: I0129 16:13:15.980150 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:15Z","lastTransitionTime":"2026-01-29T16:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.030722 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:56:32.266350255 +0000 UTC Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.083265 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.083701 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.083844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.084017 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.084135 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:16Z","lastTransitionTime":"2026-01-29T16:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.187548 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.188187 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.188451 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.188693 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.189546 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:16Z","lastTransitionTime":"2026-01-29T16:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.294345 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.294445 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.294491 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.294520 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.294538 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:16Z","lastTransitionTime":"2026-01-29T16:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.397884 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.397930 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.398127 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.398151 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.398164 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:16Z","lastTransitionTime":"2026-01-29T16:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.502099 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.502601 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.502707 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.502806 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.502910 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:16Z","lastTransitionTime":"2026-01-29T16:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.606360 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.606436 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.606452 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.606482 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.606499 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:16Z","lastTransitionTime":"2026-01-29T16:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.710410 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.710981 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.711179 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.711393 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.711583 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:16Z","lastTransitionTime":"2026-01-29T16:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.815562 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.815619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.815639 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.815664 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.815682 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:16Z","lastTransitionTime":"2026-01-29T16:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.919661 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.919742 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.919764 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.919799 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:16 crc kubenswrapper[4895]: I0129 16:13:16.919821 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:16Z","lastTransitionTime":"2026-01-29T16:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.023416 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.023483 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.023497 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.023517 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.023531 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:17Z","lastTransitionTime":"2026-01-29T16:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.031294 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 21:38:28.448006644 +0000 UTC Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.035750 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:17 crc kubenswrapper[4895]: E0129 16:13:17.035959 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.036211 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.036251 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.036288 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:17 crc kubenswrapper[4895]: E0129 16:13:17.036382 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:17 crc kubenswrapper[4895]: E0129 16:13:17.036554 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:17 crc kubenswrapper[4895]: E0129 16:13:17.036619 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.058290 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06b40904-c383-49fb-ad57-0b9a39a1e7e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c53a8e911a59c884a1c9d814ab3347d5a8e0bed226dcf6dce61b773897c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2fa5bf00ae7c3e042c9d64b4ebc400173c62aa84ca4d464feed388fcbd3cd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:
58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ba487e6a0bfe6ca04bc2da472e0f94c216dc28c213d92fdde3ca84c1a0fa86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.080943 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7681e030220fb6d37999857c4db2fedf2ef20cbf65625e692e3ad3f3e3683ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.103125 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.122605 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.127778 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.127836 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.127848 4895 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.127879 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.127892 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:17Z","lastTransitionTime":"2026-01-29T16:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.157633 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:13:13Z\\\",\\\"message\\\":\\\"r:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:13:13.893133 6937 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-image-registry/image-registry]} name:Service_openshift-image-registry/image-registry_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.93:5000:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {83c1e277-3d22-42ae-a355-f7a0ff0bd171}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:13:13.893827 6937 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:13:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba61b56ad4ce240e2
08e4cf99187a7b938c22894b17c1939620f6b74f6691ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6f9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j8c5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.178338 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7p5vp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:13:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:13:04Z\\\",\\\"message\\\":\\\"2026-01-29T16:12:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398\\\\n2026-01-29T16:12:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ed1e5929-a458-4358-b7e9-6ed7f9a78398 to /host/opt/cni/bin/\\\\n2026-01-29T16:12:19Z [verbose] multus-daemon started\\\\n2026-01-29T16:12:19Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T16:13:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:13:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vm6qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7p5vp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.196064 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-s4vrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e805b13-f27f-4252-a4aa-22689d6dc656\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b644a6b5e7d0d7b3
e238a0e3a52256869dcb4783a372866318f4f4cccf4e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssnsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-s4vrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.216459 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.230407 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.230444 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.230453 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.230468 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.230478 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:17Z","lastTransitionTime":"2026-01-29T16:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.234683 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9af81de5-cf3e-4437-b9c1-32ef1495f362\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac80d08383500ed9da27362830cbcbbd21971e0816d01afa833534ff0e680f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d4632e97a190ee81a409341ad6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rqjm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qh8vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.260497 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h44k" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ef1a9b0-7dd8-4727-8ec6-b35d5cff026b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8919efa4f613b421ea9b1f2fa4cb64f24e0d7039f6afc1946bd95b0d0597a013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5
f95cb5288dd80ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68e7e69cbbbcbdc9981e65e597957639cd5e0c6bca0ef1b5f95cb5288dd80ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878cb715c65f12a1474d23aaa318e39f78fb9738a7ff6d10e96d272f13e630a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:
12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f2b406f62c3ca2bb3a1d2a409a413cf15f07abca08e9c93945f8ee66403e27c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbb4a0ff9c42dc2a854bf7209f69f77d436ac57b8e91de21842783342567198\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50b67250e909e21341ecd9ecd4e1a12a16153487553e7849462907fd89de4209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d83762e1b454a96f83d590ece1796f95460fc53f7764b3d97029f3c121040c3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwbnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h44k\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.296749 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02f73fca-777f-4611-8f70-09054b8d4239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51baa3e82199c2dd75465d3f700888116a03fba65d0a2d0889b6bb00a6c4384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe72a7aaf95a6745fda0dd4716ad30d54333d9e318d656116acf7091ab619f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78d173cbbeb381f453a566e874b5710e1917b2f9b394670399a1631b7bad57b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
57226ac58bef8836d5b57ba740f4e5eb3c1c0f05e396ceed6ee9a53be64290f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7029c0fa106d859297ba09a4bde7d706fc73b3a7a1c1dc70e3d94fc88829007e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4b025630071495c3685f918d15db88a87f23138a25facf83803787a96d35bb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6f6c95b046c4be2debcc17e044b834d3ae1eaec248aed522e75b4d088eaf066\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43a42eec15c6fc0b14ec8c5d8f024eefd3d72fefc65e3aaa7a166f990bf3a97e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.320029 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de1f2b4f-67c3-46de-b3c9-6d005a487da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:12:16Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 16:12:10.832504 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:12:10.834351 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1393857592/tls.crt::/tmp/serving-cert-1393857592/tls.key\\\\\\\"\\\\nI0129 16:12:16.541660 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:12:16.546064 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:12:16.546099 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:12:16.546123 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:12:16.546129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:12:16.556105 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:12:16.556140 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:12:16.556152 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:12:16.556155 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:12:16.556159 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:12:16.556163 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:12:16.556445 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 16:12:16.558942 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7179712e54010faa169f51cb5a9cfa56b
f2e1352820246e25eeb853231450639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.334178 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.334607 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.334747 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.338987 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54915014-84f4-4c65-ad36-11049c97a3c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e68014e55c516cafe98fd65b5e162f27e8abb4af9675a2aaf6fecb16377fb35e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95fd8ff11220a00309543179b8c46c74ab7fd2f75a54ce16541ed217121747c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b587eb8a168e54cd2c1bef70157580a7d20cd8e378311a33996f3867de251ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d3a2e66d04777d82ffa74bd642de109e6686cd97c0a3fe56c2ce2b6414b2d3f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:11:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:11:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.344169 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.344272 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:17Z","lastTransitionTime":"2026-01-29T16:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.358502 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ef2e4f69e692375d2ce45207b5a97e027c106359b0590997d8a527ce6c2a6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c97a6758822ea7935932b56106465e503a67e7c77bee9c97b9ed7ce21d8b16ff\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.375395 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a3a34aee0b0b2cc848cd1517ff0399bb44022e0f2eae2a6cdf471c5c65cac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.389692 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-lqtb8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f95c0cb8-ec4a-4478-abe6-ccfd24db2b97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcb7c0eb43128584a6ebc19ca4032e77e5103ffa5f2bda2b015b811291b8dcad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jv4d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lqtb8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.406657 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16645b28-c655-4cea-b3cb-f522232734f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21e2f8490f2cafdd837ffa465ead2
cf783e1566e3dc680133e408a8d789444b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1245971d7881643410179de86c82f4b89f49a5b3c50f634ef706dcde144799c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:12:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pfd4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wr56f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.422670 4895 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:12:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jnfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:12:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-h9mkw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:13:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:13:17 crc 
kubenswrapper[4895]: I0129 16:13:17.446844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.446924 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.446942 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.446963 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.446978 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:17Z","lastTransitionTime":"2026-01-29T16:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.550681 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.550782 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.550812 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.550849 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.550921 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:17Z","lastTransitionTime":"2026-01-29T16:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.654761 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.654831 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.654848 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.654900 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.654917 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:17Z","lastTransitionTime":"2026-01-29T16:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.758354 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.758431 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.758456 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.758561 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.758664 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:17Z","lastTransitionTime":"2026-01-29T16:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.862671 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.862766 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.862786 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.862819 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.862840 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:17Z","lastTransitionTime":"2026-01-29T16:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.967320 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.967393 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.967417 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.967447 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:17 crc kubenswrapper[4895]: I0129 16:13:17.967469 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:17Z","lastTransitionTime":"2026-01-29T16:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.032470 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:36:58.815155376 +0000 UTC Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.071323 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.071373 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.071384 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.071402 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.071414 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:18Z","lastTransitionTime":"2026-01-29T16:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.175024 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.175376 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.175479 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.175583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.175668 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:18Z","lastTransitionTime":"2026-01-29T16:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.278689 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.278731 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.278742 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.278759 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.278772 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:18Z","lastTransitionTime":"2026-01-29T16:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.382275 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.382340 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.382354 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.382377 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.382392 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:18Z","lastTransitionTime":"2026-01-29T16:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.485230 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.485286 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.485305 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.485326 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.485339 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:18Z","lastTransitionTime":"2026-01-29T16:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.588792 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.588844 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.588855 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.588914 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.588933 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:18Z","lastTransitionTime":"2026-01-29T16:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.692626 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.692686 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.692705 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.692733 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.692759 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:18Z","lastTransitionTime":"2026-01-29T16:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.797108 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.797182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.797201 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.797230 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.797256 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:18Z","lastTransitionTime":"2026-01-29T16:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.901116 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.901190 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.901206 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.901230 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:18 crc kubenswrapper[4895]: I0129 16:13:18.901247 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:18Z","lastTransitionTime":"2026-01-29T16:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.004214 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.004273 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.004286 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.004304 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.004316 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:19Z","lastTransitionTime":"2026-01-29T16:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.033348 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:44:53.105862436 +0000 UTC Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.036894 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.037061 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.037312 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:19 crc kubenswrapper[4895]: E0129 16:13:19.037317 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.037422 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:19 crc kubenswrapper[4895]: E0129 16:13:19.037632 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:19 crc kubenswrapper[4895]: E0129 16:13:19.037684 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:19 crc kubenswrapper[4895]: E0129 16:13:19.037782 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.108013 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.108118 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.108149 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.108220 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.108242 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:19Z","lastTransitionTime":"2026-01-29T16:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.212569 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.212645 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.212664 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.212693 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.212713 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:19Z","lastTransitionTime":"2026-01-29T16:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.316464 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.316586 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.316636 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.316664 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.316684 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:19Z","lastTransitionTime":"2026-01-29T16:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.420143 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.420212 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.420240 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.420271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.420297 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:19Z","lastTransitionTime":"2026-01-29T16:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.524511 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.524604 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.524632 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.524668 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.524689 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:19Z","lastTransitionTime":"2026-01-29T16:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.629424 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.629484 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.629502 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.629530 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.629551 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:19Z","lastTransitionTime":"2026-01-29T16:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.734346 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.734410 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.734423 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.734446 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.734463 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:19Z","lastTransitionTime":"2026-01-29T16:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.838229 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.838281 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.838295 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.838315 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.838329 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:19Z","lastTransitionTime":"2026-01-29T16:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.942144 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.942206 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.942223 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.942244 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:19 crc kubenswrapper[4895]: I0129 16:13:19.942259 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:19Z","lastTransitionTime":"2026-01-29T16:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.034379 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:37:39.973969119 +0000 UTC Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.046116 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.046186 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.046205 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.046231 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.046267 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:20Z","lastTransitionTime":"2026-01-29T16:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.150161 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.150224 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.150247 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.150311 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.150336 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:20Z","lastTransitionTime":"2026-01-29T16:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.253946 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.254075 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.254089 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.254112 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.254125 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:20Z","lastTransitionTime":"2026-01-29T16:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.357932 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.358007 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.358026 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.358055 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.358075 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:20Z","lastTransitionTime":"2026-01-29T16:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.461251 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.461323 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.461346 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.461375 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.461393 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:20Z","lastTransitionTime":"2026-01-29T16:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.565339 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.565402 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.565416 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.565438 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.565456 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:20Z","lastTransitionTime":"2026-01-29T16:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.670156 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.670243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.670267 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.670297 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.670318 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:20Z","lastTransitionTime":"2026-01-29T16:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.774636 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.775507 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.775750 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.775952 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.776103 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:20Z","lastTransitionTime":"2026-01-29T16:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.880284 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.880366 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.880396 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.880428 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.880450 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:20Z","lastTransitionTime":"2026-01-29T16:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.947933 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.948116 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:20 crc kubenswrapper[4895]: E0129 16:13:20.948257 4895 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:13:20 crc kubenswrapper[4895]: E0129 16:13:20.948263 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.948207848 +0000 UTC m=+148.751185152 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:13:20 crc kubenswrapper[4895]: E0129 16:13:20.948353 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.948325271 +0000 UTC m=+148.751302565 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.948419 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:20 crc kubenswrapper[4895]: E0129 16:13:20.948631 4895 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:13:20 crc kubenswrapper[4895]: E0129 16:13:20.948776 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.948744923 +0000 UTC m=+148.751722377 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.983614 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.983672 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.983691 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.983718 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:20 crc kubenswrapper[4895]: I0129 16:13:20.983736 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:20Z","lastTransitionTime":"2026-01-29T16:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.035596 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:32:11.200940868 +0000 UTC Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.035970 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.036024 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.036149 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:21 crc kubenswrapper[4895]: E0129 16:13:21.036283 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.036395 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:21 crc kubenswrapper[4895]: E0129 16:13:21.036572 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:21 crc kubenswrapper[4895]: E0129 16:13:21.036615 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:21 crc kubenswrapper[4895]: E0129 16:13:21.036695 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.049296 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.049375 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:21 crc kubenswrapper[4895]: E0129 16:13:21.049619 4895 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:13:21 crc kubenswrapper[4895]: E0129 16:13:21.049646 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:13:21 crc kubenswrapper[4895]: E0129 16:13:21.049667 4895 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:13:21 crc kubenswrapper[4895]: E0129 16:13:21.049739 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.049715098 +0000 UTC m=+148.852692392 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:13:21 crc kubenswrapper[4895]: E0129 16:13:21.050047 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:13:21 crc kubenswrapper[4895]: E0129 16:13:21.050069 4895 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:13:21 crc kubenswrapper[4895]: E0129 16:13:21.050085 4895 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:13:21 crc kubenswrapper[4895]: E0129 16:13:21.050146 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.050130829 +0000 UTC m=+148.853108123 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.091076 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.091185 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.091210 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.091243 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.091266 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:21Z","lastTransitionTime":"2026-01-29T16:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.195583 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.195658 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.195677 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.195702 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.195721 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:21Z","lastTransitionTime":"2026-01-29T16:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.299412 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.299488 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.299505 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.299534 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.299553 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:21Z","lastTransitionTime":"2026-01-29T16:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.403551 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.403625 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.403639 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.403660 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.403682 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:21Z","lastTransitionTime":"2026-01-29T16:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.512901 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.513040 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.513128 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.513237 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.513256 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:21Z","lastTransitionTime":"2026-01-29T16:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.617983 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.618050 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.618068 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.618093 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.618111 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:21Z","lastTransitionTime":"2026-01-29T16:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.721596 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.721666 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.721684 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.721708 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.721727 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:21Z","lastTransitionTime":"2026-01-29T16:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.825810 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.825924 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.825955 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.825987 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.826007 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:21Z","lastTransitionTime":"2026-01-29T16:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.929812 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.929894 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.929912 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.929949 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:21 crc kubenswrapper[4895]: I0129 16:13:21.929963 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:21Z","lastTransitionTime":"2026-01-29T16:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.033088 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.033131 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.033141 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.033157 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.033168 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:22Z","lastTransitionTime":"2026-01-29T16:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.035949 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:44:02.824619228 +0000 UTC Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.136154 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.136197 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.136213 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.136229 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.136240 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:22Z","lastTransitionTime":"2026-01-29T16:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.239986 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.240055 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.240069 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.240093 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.240108 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:22Z","lastTransitionTime":"2026-01-29T16:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.344155 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.344239 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.344263 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.344293 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.344312 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:22Z","lastTransitionTime":"2026-01-29T16:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.447628 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.447717 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.447739 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.447769 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.447792 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:22Z","lastTransitionTime":"2026-01-29T16:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.551656 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.551726 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.551749 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.551782 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.551806 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:22Z","lastTransitionTime":"2026-01-29T16:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.656344 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.656427 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.656449 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.656480 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.656505 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:22Z","lastTransitionTime":"2026-01-29T16:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.760319 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.760420 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.760439 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.760468 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.760486 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:22Z","lastTransitionTime":"2026-01-29T16:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.863037 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.863127 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.863150 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.863182 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.863206 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:22Z","lastTransitionTime":"2026-01-29T16:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.966459 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.966833 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.967105 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.967582 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:22 crc kubenswrapper[4895]: I0129 16:13:22.967696 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:22Z","lastTransitionTime":"2026-01-29T16:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.036638 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:50:28.536579582 +0000 UTC Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.037350 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.037435 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.037523 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.037731 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:23 crc kubenswrapper[4895]: E0129 16:13:23.037967 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:23 crc kubenswrapper[4895]: E0129 16:13:23.038135 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:23 crc kubenswrapper[4895]: E0129 16:13:23.038268 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:23 crc kubenswrapper[4895]: E0129 16:13:23.038358 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.071628 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.071687 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.071706 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.071732 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.071755 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:23Z","lastTransitionTime":"2026-01-29T16:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.175354 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.175764 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.175832 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.175927 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.176019 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:23Z","lastTransitionTime":"2026-01-29T16:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.280016 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.280401 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.280575 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.280778 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.281014 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:23Z","lastTransitionTime":"2026-01-29T16:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.384957 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.385048 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.385073 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.385138 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.385167 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:23Z","lastTransitionTime":"2026-01-29T16:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.487549 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.487591 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.487602 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.487619 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.487633 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:23Z","lastTransitionTime":"2026-01-29T16:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.589831 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.590175 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.590268 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.590383 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.590487 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:23Z","lastTransitionTime":"2026-01-29T16:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.693828 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.694362 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.694530 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.694674 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.694815 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:23Z","lastTransitionTime":"2026-01-29T16:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.798672 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.799108 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.799271 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.799460 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.799605 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:23Z","lastTransitionTime":"2026-01-29T16:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.950771 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.950958 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.951010 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.951054 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:23 crc kubenswrapper[4895]: I0129 16:13:23.951081 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:23Z","lastTransitionTime":"2026-01-29T16:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.037338 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:23:31.019867212 +0000 UTC Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.054274 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.054336 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.054349 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.054367 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.054381 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:24Z","lastTransitionTime":"2026-01-29T16:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.055339 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.157351 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.157421 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.157438 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.157465 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.157483 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:24Z","lastTransitionTime":"2026-01-29T16:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.192337 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.192386 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.192395 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.192416 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.192429 4895 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:13:24Z","lastTransitionTime":"2026-01-29T16:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.252515 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm"] Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.253148 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.257298 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.257393 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.258009 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.259525 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.290796 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podStartSLOduration=68.290764953 podStartE2EDuration="1m8.290764953s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:24.290718742 +0000 UTC m=+88.093696036" watchObservedRunningTime="2026-01-29 16:13:24.290764953 +0000 UTC m=+88.093742257" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.310934 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8h44k" podStartSLOduration=68.310907898 podStartE2EDuration="1m8.310907898s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:24.30985049 +0000 UTC m=+88.112827784" watchObservedRunningTime="2026-01-29 16:13:24.310907898 
+0000 UTC m=+88.113885172" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.366265 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=39.366235878 podStartE2EDuration="39.366235878s" podCreationTimestamp="2026-01-29 16:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:24.34342706 +0000 UTC m=+88.146404334" watchObservedRunningTime="2026-01-29 16:13:24.366235878 +0000 UTC m=+88.169213172" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.390098 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd3617e-11e2-4118-90ab-8694ca9a5436-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.390162 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd3617e-11e2-4118-90ab-8694ca9a5436-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.390198 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0bd3617e-11e2-4118-90ab-8694ca9a5436-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 
16:13:24.390218 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0bd3617e-11e2-4118-90ab-8694ca9a5436-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.390404 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0bd3617e-11e2-4118-90ab-8694ca9a5436-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.397126 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-lqtb8" podStartSLOduration=68.397105344 podStartE2EDuration="1m8.397105344s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:24.396087676 +0000 UTC m=+88.199064960" watchObservedRunningTime="2026-01-29 16:13:24.397105344 +0000 UTC m=+88.200082638" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.409479 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wr56f" podStartSLOduration=67.409449638 podStartE2EDuration="1m7.409449638s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:24.409310494 +0000 UTC m=+88.212287758" watchObservedRunningTime="2026-01-29 16:13:24.409449638 +0000 
UTC m=+88.212426912" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.426603 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=0.426575552 podStartE2EDuration="426.575552ms" podCreationTimestamp="2026-01-29 16:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:24.426399537 +0000 UTC m=+88.229376841" watchObservedRunningTime="2026-01-29 16:13:24.426575552 +0000 UTC m=+88.229552816" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.462692 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=64.46266635 podStartE2EDuration="1m4.46266635s" podCreationTimestamp="2026-01-29 16:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:24.462038323 +0000 UTC m=+88.265015597" watchObservedRunningTime="2026-01-29 16:13:24.46266635 +0000 UTC m=+88.265643614" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.492093 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd3617e-11e2-4118-90ab-8694ca9a5436-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.492192 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd3617e-11e2-4118-90ab-8694ca9a5436-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.492258 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0bd3617e-11e2-4118-90ab-8694ca9a5436-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.492293 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0bd3617e-11e2-4118-90ab-8694ca9a5436-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.492355 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0bd3617e-11e2-4118-90ab-8694ca9a5436-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.492439 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0bd3617e-11e2-4118-90ab-8694ca9a5436-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.492533 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/0bd3617e-11e2-4118-90ab-8694ca9a5436-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.494007 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0bd3617e-11e2-4118-90ab-8694ca9a5436-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.496588 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=67.496564058 podStartE2EDuration="1m7.496564058s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:24.481076689 +0000 UTC m=+88.284053963" watchObservedRunningTime="2026-01-29 16:13:24.496564058 +0000 UTC m=+88.299541322" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.504129 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bd3617e-11e2-4118-90ab-8694ca9a5436-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: \"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.510536 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0bd3617e-11e2-4118-90ab-8694ca9a5436-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xj9hm\" (UID: 
\"0bd3617e-11e2-4118-90ab-8694ca9a5436\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.560632 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-7p5vp" podStartSLOduration=68.560611603 podStartE2EDuration="1m8.560611603s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:24.560041968 +0000 UTC m=+88.363019262" watchObservedRunningTime="2026-01-29 16:13:24.560611603 +0000 UTC m=+88.363588867" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.571758 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-s4vrx" podStartSLOduration=68.571737965 podStartE2EDuration="1m8.571737965s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:24.571402016 +0000 UTC m=+88.374379280" watchObservedRunningTime="2026-01-29 16:13:24.571737965 +0000 UTC m=+88.374715229" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.574768 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.605040 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=67.605016116 podStartE2EDuration="1m7.605016116s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:24.591596573 +0000 UTC m=+88.394573857" watchObservedRunningTime="2026-01-29 16:13:24.605016116 +0000 UTC m=+88.407993380" Jan 29 16:13:24 crc kubenswrapper[4895]: I0129 16:13:24.630943 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" event={"ID":"0bd3617e-11e2-4118-90ab-8694ca9a5436","Type":"ContainerStarted","Data":"93396f7afa4ff2d0d48980700fd6dc65a5b1d18b3f56eed0c5621d3ed91dd843"} Jan 29 16:13:25 crc kubenswrapper[4895]: I0129 16:13:25.036560 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:25 crc kubenswrapper[4895]: E0129 16:13:25.036806 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:25 crc kubenswrapper[4895]: I0129 16:13:25.037308 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:25 crc kubenswrapper[4895]: I0129 16:13:25.037386 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:25 crc kubenswrapper[4895]: I0129 16:13:25.037414 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:25 crc kubenswrapper[4895]: E0129 16:13:25.037561 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:25 crc kubenswrapper[4895]: E0129 16:13:25.037979 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:25 crc kubenswrapper[4895]: E0129 16:13:25.038020 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:25 crc kubenswrapper[4895]: I0129 16:13:25.038361 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:29:12.710788735 +0000 UTC Jan 29 16:13:25 crc kubenswrapper[4895]: I0129 16:13:25.038433 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 16:13:25 crc kubenswrapper[4895]: I0129 16:13:25.051642 4895 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 16:13:25 crc kubenswrapper[4895]: I0129 16:13:25.636067 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" event={"ID":"0bd3617e-11e2-4118-90ab-8694ca9a5436","Type":"ContainerStarted","Data":"0eca4a647c281f2d53c50ea17163df1e43cfa397179e3f0bea1326d9991f445d"} Jan 29 16:13:25 crc kubenswrapper[4895]: I0129 16:13:25.651308 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xj9hm" podStartSLOduration=69.651284781 podStartE2EDuration="1m9.651284781s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:25.649953025 +0000 UTC m=+89.452930309" watchObservedRunningTime="2026-01-29 16:13:25.651284781 +0000 UTC m=+89.454262055" Jan 29 16:13:27 crc kubenswrapper[4895]: I0129 16:13:27.036049 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:27 crc kubenswrapper[4895]: I0129 16:13:27.036137 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:27 crc kubenswrapper[4895]: E0129 16:13:27.037458 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:27 crc kubenswrapper[4895]: I0129 16:13:27.037544 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:27 crc kubenswrapper[4895]: I0129 16:13:27.037671 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:27 crc kubenswrapper[4895]: E0129 16:13:27.037848 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:27 crc kubenswrapper[4895]: E0129 16:13:27.038090 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:27 crc kubenswrapper[4895]: E0129 16:13:27.038212 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:29 crc kubenswrapper[4895]: I0129 16:13:29.036345 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:29 crc kubenswrapper[4895]: E0129 16:13:29.037087 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:29 crc kubenswrapper[4895]: I0129 16:13:29.036786 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:29 crc kubenswrapper[4895]: I0129 16:13:29.037257 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:29 crc kubenswrapper[4895]: I0129 16:13:29.036402 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:29 crc kubenswrapper[4895]: E0129 16:13:29.037350 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:29 crc kubenswrapper[4895]: E0129 16:13:29.037582 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:29 crc kubenswrapper[4895]: E0129 16:13:29.037750 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:30 crc kubenswrapper[4895]: I0129 16:13:30.037502 4895 scope.go:117] "RemoveContainer" containerID="ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833" Jan 29 16:13:30 crc kubenswrapper[4895]: E0129 16:13:30.038168 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" Jan 29 16:13:31 crc kubenswrapper[4895]: I0129 16:13:31.036045 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:31 crc kubenswrapper[4895]: I0129 16:13:31.036087 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:31 crc kubenswrapper[4895]: I0129 16:13:31.036124 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:31 crc kubenswrapper[4895]: E0129 16:13:31.036852 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:31 crc kubenswrapper[4895]: E0129 16:13:31.037019 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:31 crc kubenswrapper[4895]: I0129 16:13:31.036156 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:31 crc kubenswrapper[4895]: E0129 16:13:31.037983 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:31 crc kubenswrapper[4895]: E0129 16:13:31.038473 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:33 crc kubenswrapper[4895]: I0129 16:13:33.036207 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:33 crc kubenswrapper[4895]: I0129 16:13:33.036288 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:33 crc kubenswrapper[4895]: I0129 16:13:33.036220 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:33 crc kubenswrapper[4895]: I0129 16:13:33.036408 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:33 crc kubenswrapper[4895]: E0129 16:13:33.036555 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:33 crc kubenswrapper[4895]: E0129 16:13:33.036718 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:33 crc kubenswrapper[4895]: E0129 16:13:33.036820 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:33 crc kubenswrapper[4895]: E0129 16:13:33.036953 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:35 crc kubenswrapper[4895]: I0129 16:13:35.035995 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:35 crc kubenswrapper[4895]: I0129 16:13:35.036063 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:35 crc kubenswrapper[4895]: I0129 16:13:35.036202 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:35 crc kubenswrapper[4895]: I0129 16:13:35.036250 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:35 crc kubenswrapper[4895]: E0129 16:13:35.036276 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:35 crc kubenswrapper[4895]: E0129 16:13:35.036522 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:35 crc kubenswrapper[4895]: E0129 16:13:35.036737 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:35 crc kubenswrapper[4895]: E0129 16:13:35.036953 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:35 crc kubenswrapper[4895]: I0129 16:13:35.326786 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:35 crc kubenswrapper[4895]: E0129 16:13:35.327097 4895 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:13:35 crc kubenswrapper[4895]: E0129 16:13:35.327262 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs podName:5113e2b8-dc97-42a1-aa1c-3d604cada8c2 nodeName:}" failed. No retries permitted until 2026-01-29 16:14:39.32722312 +0000 UTC m=+163.130200564 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs") pod "network-metrics-daemon-h9mkw" (UID: "5113e2b8-dc97-42a1-aa1c-3d604cada8c2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:13:37 crc kubenswrapper[4895]: I0129 16:13:37.036171 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:37 crc kubenswrapper[4895]: I0129 16:13:37.036230 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:37 crc kubenswrapper[4895]: I0129 16:13:37.036323 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:37 crc kubenswrapper[4895]: E0129 16:13:37.037511 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:37 crc kubenswrapper[4895]: I0129 16:13:37.037550 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:37 crc kubenswrapper[4895]: E0129 16:13:37.037771 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:37 crc kubenswrapper[4895]: E0129 16:13:37.037923 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:37 crc kubenswrapper[4895]: E0129 16:13:37.038186 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:39 crc kubenswrapper[4895]: I0129 16:13:39.036009 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:39 crc kubenswrapper[4895]: I0129 16:13:39.036010 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:39 crc kubenswrapper[4895]: E0129 16:13:39.037186 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:39 crc kubenswrapper[4895]: I0129 16:13:39.036324 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:39 crc kubenswrapper[4895]: I0129 16:13:39.036196 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:39 crc kubenswrapper[4895]: E0129 16:13:39.037408 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:39 crc kubenswrapper[4895]: E0129 16:13:39.037499 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:39 crc kubenswrapper[4895]: E0129 16:13:39.037642 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:41 crc kubenswrapper[4895]: I0129 16:13:41.036528 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:41 crc kubenswrapper[4895]: I0129 16:13:41.036627 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:41 crc kubenswrapper[4895]: I0129 16:13:41.036737 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:41 crc kubenswrapper[4895]: I0129 16:13:41.037019 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:41 crc kubenswrapper[4895]: E0129 16:13:41.037226 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:41 crc kubenswrapper[4895]: I0129 16:13:41.038914 4895 scope.go:117] "RemoveContainer" containerID="ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833" Jan 29 16:13:41 crc kubenswrapper[4895]: E0129 16:13:41.039072 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:41 crc kubenswrapper[4895]: E0129 16:13:41.039228 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-j8c5m_openshift-ovn-kubernetes(b00f5c7f-4264-4580-9c5a-ace62ee4b87d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" Jan 29 16:13:41 crc kubenswrapper[4895]: E0129 16:13:41.039379 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:41 crc kubenswrapper[4895]: E0129 16:13:41.039490 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:43 crc kubenswrapper[4895]: I0129 16:13:43.036640 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:43 crc kubenswrapper[4895]: I0129 16:13:43.036719 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:43 crc kubenswrapper[4895]: I0129 16:13:43.036640 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:43 crc kubenswrapper[4895]: E0129 16:13:43.036934 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:43 crc kubenswrapper[4895]: I0129 16:13:43.036761 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:43 crc kubenswrapper[4895]: E0129 16:13:43.037055 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:43 crc kubenswrapper[4895]: E0129 16:13:43.037143 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:43 crc kubenswrapper[4895]: E0129 16:13:43.037256 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:45 crc kubenswrapper[4895]: I0129 16:13:45.036106 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:45 crc kubenswrapper[4895]: I0129 16:13:45.036176 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:45 crc kubenswrapper[4895]: I0129 16:13:45.036105 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:45 crc kubenswrapper[4895]: I0129 16:13:45.036309 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:45 crc kubenswrapper[4895]: E0129 16:13:45.036481 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:45 crc kubenswrapper[4895]: E0129 16:13:45.036825 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:45 crc kubenswrapper[4895]: E0129 16:13:45.037009 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:45 crc kubenswrapper[4895]: E0129 16:13:45.037188 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:47 crc kubenswrapper[4895]: I0129 16:13:47.036257 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:47 crc kubenswrapper[4895]: I0129 16:13:47.036313 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:47 crc kubenswrapper[4895]: I0129 16:13:47.036257 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:47 crc kubenswrapper[4895]: I0129 16:13:47.036421 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:47 crc kubenswrapper[4895]: E0129 16:13:47.037572 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:47 crc kubenswrapper[4895]: E0129 16:13:47.037708 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:47 crc kubenswrapper[4895]: E0129 16:13:47.037813 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:47 crc kubenswrapper[4895]: E0129 16:13:47.037904 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:49 crc kubenswrapper[4895]: I0129 16:13:49.036663 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:49 crc kubenswrapper[4895]: I0129 16:13:49.036810 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:49 crc kubenswrapper[4895]: E0129 16:13:49.036930 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:49 crc kubenswrapper[4895]: I0129 16:13:49.036826 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:49 crc kubenswrapper[4895]: E0129 16:13:49.037060 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:49 crc kubenswrapper[4895]: I0129 16:13:49.037098 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:49 crc kubenswrapper[4895]: E0129 16:13:49.037269 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:49 crc kubenswrapper[4895]: E0129 16:13:49.037532 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:50 crc kubenswrapper[4895]: I0129 16:13:50.741543 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7p5vp_dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5/kube-multus/1.log" Jan 29 16:13:50 crc kubenswrapper[4895]: I0129 16:13:50.742625 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7p5vp_dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5/kube-multus/0.log" Jan 29 16:13:50 crc kubenswrapper[4895]: I0129 16:13:50.742678 4895 generic.go:334] "Generic (PLEG): container finished" podID="dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5" containerID="35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397" exitCode=1 Jan 29 16:13:50 crc kubenswrapper[4895]: I0129 16:13:50.742728 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7p5vp" event={"ID":"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5","Type":"ContainerDied","Data":"35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397"} Jan 29 16:13:50 crc kubenswrapper[4895]: I0129 16:13:50.742787 4895 scope.go:117] "RemoveContainer" containerID="e1ea5ca53c66f8436bbeaa19e4458b52a29e95df0883aa2db67309edb59df244" Jan 29 16:13:50 crc kubenswrapper[4895]: I0129 16:13:50.743414 4895 scope.go:117] "RemoveContainer" containerID="35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397" Jan 29 16:13:50 crc kubenswrapper[4895]: E0129 16:13:50.743721 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-7p5vp_openshift-multus(dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5)\"" pod="openshift-multus/multus-7p5vp" podUID="dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5" Jan 29 16:13:51 crc kubenswrapper[4895]: I0129 16:13:51.036551 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:51 crc kubenswrapper[4895]: I0129 16:13:51.037093 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:51 crc kubenswrapper[4895]: I0129 16:13:51.036666 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:51 crc kubenswrapper[4895]: I0129 16:13:51.036650 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:51 crc kubenswrapper[4895]: E0129 16:13:51.037615 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:51 crc kubenswrapper[4895]: E0129 16:13:51.037842 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2" Jan 29 16:13:51 crc kubenswrapper[4895]: E0129 16:13:51.037768 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:51 crc kubenswrapper[4895]: E0129 16:13:51.038341 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:51 crc kubenswrapper[4895]: I0129 16:13:51.748925 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7p5vp_dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5/kube-multus/1.log" Jan 29 16:13:53 crc kubenswrapper[4895]: I0129 16:13:53.036361 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:13:53 crc kubenswrapper[4895]: E0129 16:13:53.036531 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:13:53 crc kubenswrapper[4895]: I0129 16:13:53.036805 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:13:53 crc kubenswrapper[4895]: E0129 16:13:53.036903 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:13:53 crc kubenswrapper[4895]: I0129 16:13:53.037058 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:13:53 crc kubenswrapper[4895]: E0129 16:13:53.037124 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:13:53 crc kubenswrapper[4895]: I0129 16:13:53.037376 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:13:53 crc kubenswrapper[4895]: E0129 16:13:53.037453 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2"
Jan 29 16:13:55 crc kubenswrapper[4895]: I0129 16:13:55.036148 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:13:55 crc kubenswrapper[4895]: I0129 16:13:55.036228 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw"
Jan 29 16:13:55 crc kubenswrapper[4895]: I0129 16:13:55.036231 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:13:55 crc kubenswrapper[4895]: I0129 16:13:55.036177 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:13:55 crc kubenswrapper[4895]: E0129 16:13:55.036364 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:13:55 crc kubenswrapper[4895]: E0129 16:13:55.036489 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2"
Jan 29 16:13:55 crc kubenswrapper[4895]: E0129 16:13:55.036635 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:13:55 crc kubenswrapper[4895]: E0129 16:13:55.036747 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:13:56 crc kubenswrapper[4895]: I0129 16:13:56.037688 4895 scope.go:117] "RemoveContainer" containerID="ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833"
Jan 29 16:13:56 crc kubenswrapper[4895]: I0129 16:13:56.770464 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/3.log"
Jan 29 16:13:56 crc kubenswrapper[4895]: I0129 16:13:56.774597 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerStarted","Data":"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca"}
Jan 29 16:13:56 crc kubenswrapper[4895]: I0129 16:13:56.775106 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m"
Jan 29 16:13:56 crc kubenswrapper[4895]: I0129 16:13:56.805724 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podStartSLOduration=100.805699238 podStartE2EDuration="1m40.805699238s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:13:56.804818225 +0000 UTC m=+120.607795509" watchObservedRunningTime="2026-01-29 16:13:56.805699238 +0000 UTC m=+120.608676512"
Jan 29 16:13:57 crc kubenswrapper[4895]: E0129 16:13:57.013051 4895 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 29 16:13:57 crc kubenswrapper[4895]: I0129 16:13:57.036576 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:13:57 crc kubenswrapper[4895]: E0129 16:13:57.036787 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:13:57 crc kubenswrapper[4895]: I0129 16:13:57.036949 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:13:57 crc kubenswrapper[4895]: E0129 16:13:57.039069 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:13:57 crc kubenswrapper[4895]: I0129 16:13:57.039393 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:13:57 crc kubenswrapper[4895]: I0129 16:13:57.039442 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw"
Jan 29 16:13:57 crc kubenswrapper[4895]: E0129 16:13:57.039530 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:13:57 crc kubenswrapper[4895]: E0129 16:13:57.039696 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2"
Jan 29 16:13:57 crc kubenswrapper[4895]: I0129 16:13:57.044849 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-h9mkw"]
Jan 29 16:13:57 crc kubenswrapper[4895]: E0129 16:13:57.150161 4895 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 16:13:57 crc kubenswrapper[4895]: I0129 16:13:57.779366 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw"
Jan 29 16:13:57 crc kubenswrapper[4895]: E0129 16:13:57.780059 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2"
Jan 29 16:13:59 crc kubenswrapper[4895]: I0129 16:13:59.036213 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:13:59 crc kubenswrapper[4895]: I0129 16:13:59.036259 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:13:59 crc kubenswrapper[4895]: I0129 16:13:59.036213 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:13:59 crc kubenswrapper[4895]: E0129 16:13:59.036418 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:13:59 crc kubenswrapper[4895]: I0129 16:13:59.036545 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw"
Jan 29 16:13:59 crc kubenswrapper[4895]: E0129 16:13:59.036713 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:13:59 crc kubenswrapper[4895]: E0129 16:13:59.036789 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2"
Jan 29 16:13:59 crc kubenswrapper[4895]: E0129 16:13:59.036892 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:14:01 crc kubenswrapper[4895]: I0129 16:14:01.036580 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:14:01 crc kubenswrapper[4895]: E0129 16:14:01.036808 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:14:01 crc kubenswrapper[4895]: I0129 16:14:01.037171 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw"
Jan 29 16:14:01 crc kubenswrapper[4895]: I0129 16:14:01.037236 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:14:01 crc kubenswrapper[4895]: E0129 16:14:01.037344 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2"
Jan 29 16:14:01 crc kubenswrapper[4895]: I0129 16:14:01.037419 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:14:01 crc kubenswrapper[4895]: E0129 16:14:01.037665 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:14:01 crc kubenswrapper[4895]: E0129 16:14:01.037794 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:14:02 crc kubenswrapper[4895]: I0129 16:14:02.037495 4895 scope.go:117] "RemoveContainer" containerID="35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397"
Jan 29 16:14:02 crc kubenswrapper[4895]: E0129 16:14:02.151688 4895 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 16:14:02 crc kubenswrapper[4895]: I0129 16:14:02.800192 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7p5vp_dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5/kube-multus/1.log"
Jan 29 16:14:02 crc kubenswrapper[4895]: I0129 16:14:02.800399 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7p5vp" event={"ID":"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5","Type":"ContainerStarted","Data":"660b56274f2e87987653cca9fdc4a251f69f781a488edd0d7c75ff5126604a2d"}
Jan 29 16:14:03 crc kubenswrapper[4895]: I0129 16:14:03.035927 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:14:03 crc kubenswrapper[4895]: I0129 16:14:03.035938 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:14:03 crc kubenswrapper[4895]: I0129 16:14:03.035991 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:14:03 crc kubenswrapper[4895]: E0129 16:14:03.036751 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:14:03 crc kubenswrapper[4895]: E0129 16:14:03.036541 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:14:03 crc kubenswrapper[4895]: E0129 16:14:03.036663 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:14:03 crc kubenswrapper[4895]: I0129 16:14:03.036007 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw"
Jan 29 16:14:03 crc kubenswrapper[4895]: E0129 16:14:03.036933 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2"
Jan 29 16:14:05 crc kubenswrapper[4895]: I0129 16:14:05.036616 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:14:05 crc kubenswrapper[4895]: I0129 16:14:05.036715 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:14:05 crc kubenswrapper[4895]: I0129 16:14:05.036626 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:14:05 crc kubenswrapper[4895]: I0129 16:14:05.036842 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw"
Jan 29 16:14:05 crc kubenswrapper[4895]: E0129 16:14:05.036858 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:14:05 crc kubenswrapper[4895]: E0129 16:14:05.036999 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:14:05 crc kubenswrapper[4895]: E0129 16:14:05.037081 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:14:05 crc kubenswrapper[4895]: E0129 16:14:05.037296 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2"
Jan 29 16:14:07 crc kubenswrapper[4895]: I0129 16:14:07.035887 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:14:07 crc kubenswrapper[4895]: I0129 16:14:07.035980 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:14:07 crc kubenswrapper[4895]: E0129 16:14:07.037500 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:14:07 crc kubenswrapper[4895]: I0129 16:14:07.037660 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:14:07 crc kubenswrapper[4895]: E0129 16:14:07.038039 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:14:07 crc kubenswrapper[4895]: I0129 16:14:07.038427 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw"
Jan 29 16:14:07 crc kubenswrapper[4895]: E0129 16:14:07.038578 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-h9mkw" podUID="5113e2b8-dc97-42a1-aa1c-3d604cada8c2"
Jan 29 16:14:07 crc kubenswrapper[4895]: E0129 16:14:07.038928 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:14:09 crc kubenswrapper[4895]: I0129 16:14:09.036527 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:14:09 crc kubenswrapper[4895]: I0129 16:14:09.036560 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:14:09 crc kubenswrapper[4895]: I0129 16:14:09.036596 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:14:09 crc kubenswrapper[4895]: I0129 16:14:09.036853 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw"
Jan 29 16:14:09 crc kubenswrapper[4895]: I0129 16:14:09.041004 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 16:14:09 crc kubenswrapper[4895]: I0129 16:14:09.043064 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 16:14:09 crc kubenswrapper[4895]: I0129 16:14:09.043093 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 29 16:14:09 crc kubenswrapper[4895]: I0129 16:14:09.043207 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 16:14:09 crc kubenswrapper[4895]: I0129 16:14:09.043502 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 29 16:14:09 crc kubenswrapper[4895]: I0129 16:14:09.043687 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 16:14:10 crc kubenswrapper[4895]: I0129 16:14:10.703641 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m"
Jan 29 16:14:14 crc kubenswrapper[4895]: I0129 16:14:14.953785 4895 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.006067 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.006672 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.009310 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r8fhh"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.009606 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6bcff"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.010055 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6bcff"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.010291 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cv44f"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.010743 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.011258 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.012639 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.013663 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.013679 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.014762 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.015171 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.015430 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.016276 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.021341 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-t675c"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.022214 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.022797 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bq54w"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.022939 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t675c"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.023153 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.022805 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.023073 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.023694 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.023956 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.024130 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.024260 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.024371 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.024392 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.023255 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.024472 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.023323 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.024715 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.024806 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.024942 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.025222 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.025294 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.027001 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qt6sq"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.025440 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.027337 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.028746 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.034226 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.034433 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.034671 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.034957 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qt6sq"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.035169 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4n52t"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.035775 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-p6ck2"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.036249 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.036292 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2"]
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.036441 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p6ck2"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.036527 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.038885 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.043610 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.043832 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.044287 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.044543 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.044740 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.044959 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.045041 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.045096 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.045226 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.045297 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.045316 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.045488 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.045528 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.045603 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.045748 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.044978 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.044978 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.046106 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.070414 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.070619 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.071274 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.071435 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.071803 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.072282 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.072328 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.072706 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.075334 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 16:14:15
crc kubenswrapper[4895]: I0129 16:14:15.085403 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.102013 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.102572 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9v2kn"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.103327 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.103722 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.104847 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.106275 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.107113 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.107364 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.107727 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.112798 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.112932 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.115094 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.116986 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.117129 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.117240 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 
16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.117736 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.117792 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.117949 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.117901 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.118002 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.118806 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-s5x8l"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.119314 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pmkl9"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.119778 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.120237 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-snck7"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.120801 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.121448 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.121953 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.122325 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.122780 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.118923 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.122977 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.123917 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.123964 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.124017 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.124060 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.126441 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.128970 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141427 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-service-ca\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141472 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-config\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141509 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3acaf5b1-0f37-4157-85e0-926718993903-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141539 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbzhl\" (UniqueName: \"kubernetes.io/projected/a74a90ae-daab-4183-88d5-4cb49b9ed96e-kube-api-access-fbzhl\") pod 
\"machine-approver-56656f9798-vxd7m\" (UID: \"a74a90ae-daab-4183-88d5-4cb49b9ed96e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141564 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-serving-cert\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141580 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9ef793-c9ca-4c0a-9ab0-09115c564646-config\") pod \"machine-api-operator-5694c8668f-cv44f\" (UID: \"2c9ef793-c9ca-4c0a-9ab0-09115c564646\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141596 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/84c1ec53-3121-4cd0-9665-4c8a43864032-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4s5g2\" (UID: \"84c1ec53-3121-4cd0-9665-4c8a43864032\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141613 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c9ef793-c9ca-4c0a-9ab0-09115c564646-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cv44f\" (UID: \"2c9ef793-c9ca-4c0a-9ab0-09115c564646\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 
16:14:15.141635 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01f63105-32cc-4bc3-a677-8d0a7d967af2-encryption-config\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141650 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3acaf5b1-0f37-4157-85e0-926718993903-audit-dir\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141669 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324d5ea2-a2d9-4001-8892-3aa92d6e323e-config\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141684 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlrvc\" (UniqueName: \"kubernetes.io/projected/9dda0f03-a7e0-442d-b684-9b6b5a1885ab-kube-api-access-tlrvc\") pod \"downloads-7954f5f757-t675c\" (UID: \"9dda0f03-a7e0-442d-b684-9b6b5a1885ab\") " pod="openshift-console/downloads-7954f5f757-t675c" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141702 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/324d5ea2-a2d9-4001-8892-3aa92d6e323e-etcd-service-ca\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141719 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8098be9-256f-478d-a218-3e74a5ef8ca9-service-ca-bundle\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141734 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84c1ec53-3121-4cd0-9665-4c8a43864032-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4s5g2\" (UID: \"84c1ec53-3121-4cd0-9665-4c8a43864032\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141750 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141768 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3acaf5b1-0f37-4157-85e0-926718993903-encryption-config\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141786 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-config\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141802 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6298c\" (UniqueName: \"kubernetes.io/projected/324d5ea2-a2d9-4001-8892-3aa92d6e323e-kube-api-access-6298c\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141819 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141835 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-oauth-serving-cert\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141852 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4jdh\" (UniqueName: \"kubernetes.io/projected/01e50c09-6efb-44d3-8919-dcea4b9f6a72-kube-api-access-h4jdh\") pod \"dns-operator-744455d44c-qt6sq\" (UID: \"01e50c09-6efb-44d3-8919-dcea4b9f6a72\") " pod="openshift-dns-operator/dns-operator-744455d44c-qt6sq" Jan 29 16:14:15 
crc kubenswrapper[4895]: I0129 16:14:15.141883 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-audit\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141908 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-config\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141925 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74a90ae-daab-4183-88d5-4cb49b9ed96e-config\") pod \"machine-approver-56656f9798-vxd7m\" (UID: \"a74a90ae-daab-4183-88d5-4cb49b9ed96e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141940 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-image-import-ca\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141956 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-client-ca\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141972 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-config\") pod \"route-controller-manager-6576b87f9c-z4xsp\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.141989 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3acaf5b1-0f37-4157-85e0-926718993903-etcd-client\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142008 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/324d5ea2-a2d9-4001-8892-3aa92d6e323e-etcd-ca\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142024 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2c9ef793-c9ca-4c0a-9ab0-09115c564646-images\") pod \"machine-api-operator-5694c8668f-cv44f\" (UID: \"2c9ef793-c9ca-4c0a-9ab0-09115c564646\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142041 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qznrs\" (UniqueName: 
\"kubernetes.io/projected/01f63105-32cc-4bc3-a677-8d0a7d967af2-kube-api-access-qznrs\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142059 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d07af33-8811-4683-883c-6e20ef713ea4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mzj8m\" (UID: \"9d07af33-8811-4683-883c-6e20ef713ea4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142074 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc82m\" (UniqueName: \"kubernetes.io/projected/2c9ef793-c9ca-4c0a-9ab0-09115c564646-kube-api-access-jc82m\") pod \"machine-api-operator-5694c8668f-cv44f\" (UID: \"2c9ef793-c9ca-4c0a-9ab0-09115c564646\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142089 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-etcd-serving-ca\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142103 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01f63105-32cc-4bc3-a677-8d0a7d967af2-serving-cert\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc 
kubenswrapper[4895]: I0129 16:14:15.142127 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01f63105-32cc-4bc3-a677-8d0a7d967af2-audit-dir\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142142 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8098be9-256f-478d-a218-3e74a5ef8ca9-serving-cert\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142159 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6wp6\" (UniqueName: \"kubernetes.io/projected/9d07af33-8811-4683-883c-6e20ef713ea4-kube-api-access-p6wp6\") pod \"openshift-config-operator-7777fb866f-mzj8m\" (UID: \"9d07af33-8811-4683-883c-6e20ef713ea4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142178 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c99b9a-0b66-49b1-8d55-72a6bea92059-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kq9bd\" (UID: \"a4c99b9a-0b66-49b1-8d55-72a6bea92059\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142210 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjzg6\" (UniqueName: 
\"kubernetes.io/projected/b8098be9-256f-478d-a218-3e74a5ef8ca9-kube-api-access-bjzg6\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142237 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z69p9\" (UniqueName: \"kubernetes.io/projected/84c1ec53-3121-4cd0-9665-4c8a43864032-kube-api-access-z69p9\") pod \"cluster-image-registry-operator-dc59b4c8b-4s5g2\" (UID: \"84c1ec53-3121-4cd0-9665-4c8a43864032\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142261 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/324d5ea2-a2d9-4001-8892-3aa92d6e323e-serving-cert\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142282 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d07af33-8811-4683-883c-6e20ef713ea4-serving-cert\") pod \"openshift-config-operator-7777fb866f-mzj8m\" (UID: \"9d07af33-8811-4683-883c-6e20ef713ea4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142296 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-serving-cert\") pod \"route-controller-manager-6576b87f9c-z4xsp\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142320 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53da942c-31af-4b5b-9e63-4e53147ad257-serving-cert\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142336 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a74a90ae-daab-4183-88d5-4cb49b9ed96e-machine-approver-tls\") pod \"machine-approver-56656f9798-vxd7m\" (UID: \"a74a90ae-daab-4183-88d5-4cb49b9ed96e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142354 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqhnm\" (UniqueName: \"kubernetes.io/projected/a4c99b9a-0b66-49b1-8d55-72a6bea92059-kube-api-access-xqhnm\") pod \"openshift-apiserver-operator-796bbdcf4f-kq9bd\" (UID: \"a4c99b9a-0b66-49b1-8d55-72a6bea92059\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142373 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3acaf5b1-0f37-4157-85e0-926718993903-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142392 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z4xm\" (UniqueName: \"kubernetes.io/projected/164e7cd8-e106-4eab-81ea-bc00c73daf2a-kube-api-access-2z4xm\") pod \"cluster-samples-operator-665b6dd947-4xrqk\" (UID: \"164e7cd8-e106-4eab-81ea-bc00c73daf2a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142409 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/01f63105-32cc-4bc3-a677-8d0a7d967af2-node-pullsecrets\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142425 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nclcx\" (UniqueName: \"kubernetes.io/projected/6dd34441-4294-4e90-9f2d-909c5aecdff7-kube-api-access-nclcx\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142441 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/01e50c09-6efb-44d3-8919-dcea4b9f6a72-metrics-tls\") pod \"dns-operator-744455d44c-qt6sq\" (UID: \"01e50c09-6efb-44d3-8919-dcea4b9f6a72\") " pod="openshift-dns-operator/dns-operator-744455d44c-qt6sq" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142456 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a74a90ae-daab-4183-88d5-4cb49b9ed96e-auth-proxy-config\") pod \"machine-approver-56656f9798-vxd7m\" (UID: 
\"a74a90ae-daab-4183-88d5-4cb49b9ed96e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142478 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3acaf5b1-0f37-4157-85e0-926718993903-serving-cert\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142509 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4c99b9a-0b66-49b1-8d55-72a6bea92059-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kq9bd\" (UID: \"a4c99b9a-0b66-49b1-8d55-72a6bea92059\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142535 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-oauth-config\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142550 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-trusted-ca-bundle\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142565 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-client-ca\") pod \"route-controller-manager-6576b87f9c-z4xsp\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142581 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01f63105-32cc-4bc3-a677-8d0a7d967af2-etcd-client\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142607 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3acaf5b1-0f37-4157-85e0-926718993903-audit-policies\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142636 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/324d5ea2-a2d9-4001-8892-3aa92d6e323e-etcd-client\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142660 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz89h\" (UniqueName: \"kubernetes.io/projected/53da942c-31af-4b5b-9e63-4e53147ad257-kube-api-access-jz89h\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc 
kubenswrapper[4895]: I0129 16:14:15.142683 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfgc\" (UniqueName: \"kubernetes.io/projected/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-kube-api-access-4qfgc\") pod \"route-controller-manager-6576b87f9c-z4xsp\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142704 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8098be9-256f-478d-a218-3e74a5ef8ca9-config\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142721 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84c1ec53-3121-4cd0-9665-4c8a43864032-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4s5g2\" (UID: \"84c1ec53-3121-4cd0-9665-4c8a43864032\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142738 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6px6\" (UniqueName: \"kubernetes.io/projected/3acaf5b1-0f37-4157-85e0-926718993903-kube-api-access-c6px6\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142759 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/164e7cd8-e106-4eab-81ea-bc00c73daf2a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4xrqk\" (UID: \"164e7cd8-e106-4eab-81ea-bc00c73daf2a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.142780 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8098be9-256f-478d-a218-3e74a5ef8ca9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.145672 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.146320 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.146566 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g9q58"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.147133 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r8fhh"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.147166 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jjgzj"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.147629 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.148384 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.148697 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cv44f"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.148787 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.149120 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.149265 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g9q58" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.149414 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jjgzj" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.149587 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.157486 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.158853 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.159717 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.160565 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.160815 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.161787 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pcdgm"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.162319 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.169786 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.170109 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.171129 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.172145 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.172802 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.173171 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.173443 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.173764 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.173989 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.174227 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.174370 4895 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.174481 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.174635 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.174248 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.175119 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.174281 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.175409 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j8jdx"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.175960 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.176418 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.181196 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.181914 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.182163 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.184515 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xc67m"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.184816 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.185442 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.185737 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.190989 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.191268 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.197996 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.201030 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.202132 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.202312 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.202421 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.202836 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.203459 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.203780 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.208073 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.208550 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.214454 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.215323 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.218449 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.219787 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.230228 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.230731 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.232021 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.235990 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6bcff"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 
16:14:15.236067 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.236084 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-snck7"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.241400 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.242455 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.244312 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.244765 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g9q58"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.248088 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.248239 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.248329 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.248442 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.249618 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/downloads-7954f5f757-t675c"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.252082 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254257 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6px6\" (UniqueName: \"kubernetes.io/projected/3acaf5b1-0f37-4157-85e0-926718993903-kube-api-access-c6px6\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254302 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz89h\" (UniqueName: \"kubernetes.io/projected/53da942c-31af-4b5b-9e63-4e53147ad257-kube-api-access-jz89h\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254332 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qfgc\" (UniqueName: \"kubernetes.io/projected/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-kube-api-access-4qfgc\") pod \"route-controller-manager-6576b87f9c-z4xsp\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254355 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8098be9-256f-478d-a218-3e74a5ef8ca9-config\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 
16:14:15.254375 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84c1ec53-3121-4cd0-9665-4c8a43864032-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4s5g2\" (UID: \"84c1ec53-3121-4cd0-9665-4c8a43864032\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254392 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/164e7cd8-e106-4eab-81ea-bc00c73daf2a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4xrqk\" (UID: \"164e7cd8-e106-4eab-81ea-bc00c73daf2a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254408 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8098be9-256f-478d-a218-3e74a5ef8ca9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254429 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-service-ca\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254449 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9ef793-c9ca-4c0a-9ab0-09115c564646-config\") pod \"machine-api-operator-5694c8668f-cv44f\" (UID: 
\"2c9ef793-c9ca-4c0a-9ab0-09115c564646\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254470 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-config\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254489 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3acaf5b1-0f37-4157-85e0-926718993903-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254508 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbzhl\" (UniqueName: \"kubernetes.io/projected/a74a90ae-daab-4183-88d5-4cb49b9ed96e-kube-api-access-fbzhl\") pod \"machine-approver-56656f9798-vxd7m\" (UID: \"a74a90ae-daab-4183-88d5-4cb49b9ed96e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254529 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-serving-cert\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254546 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/84c1ec53-3121-4cd0-9665-4c8a43864032-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4s5g2\" (UID: \"84c1ec53-3121-4cd0-9665-4c8a43864032\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254563 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c9ef793-c9ca-4c0a-9ab0-09115c564646-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cv44f\" (UID: \"2c9ef793-c9ca-4c0a-9ab0-09115c564646\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254584 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01f63105-32cc-4bc3-a677-8d0a7d967af2-encryption-config\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254601 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3acaf5b1-0f37-4157-85e0-926718993903-audit-dir\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254620 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324d5ea2-a2d9-4001-8892-3aa92d6e323e-config\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254637 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8098be9-256f-478d-a218-3e74a5ef8ca9-service-ca-bundle\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254653 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlrvc\" (UniqueName: \"kubernetes.io/projected/9dda0f03-a7e0-442d-b684-9b6b5a1885ab-kube-api-access-tlrvc\") pod \"downloads-7954f5f757-t675c\" (UID: \"9dda0f03-a7e0-442d-b684-9b6b5a1885ab\") " pod="openshift-console/downloads-7954f5f757-t675c" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254672 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/324d5ea2-a2d9-4001-8892-3aa92d6e323e-etcd-service-ca\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254696 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84c1ec53-3121-4cd0-9665-4c8a43864032-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4s5g2\" (UID: \"84c1ec53-3121-4cd0-9665-4c8a43864032\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254716 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" 
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254738 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3acaf5b1-0f37-4157-85e0-926718993903-encryption-config\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254759 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-config\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254777 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6298c\" (UniqueName: \"kubernetes.io/projected/324d5ea2-a2d9-4001-8892-3aa92d6e323e-kube-api-access-6298c\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254794 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254808 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-config\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " 
pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254823 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-oauth-serving-cert\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254841 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4jdh\" (UniqueName: \"kubernetes.io/projected/01e50c09-6efb-44d3-8919-dcea4b9f6a72-kube-api-access-h4jdh\") pod \"dns-operator-744455d44c-qt6sq\" (UID: \"01e50c09-6efb-44d3-8919-dcea4b9f6a72\") " pod="openshift-dns-operator/dns-operator-744455d44c-qt6sq" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254854 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-audit\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254900 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-client-ca\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254920 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74a90ae-daab-4183-88d5-4cb49b9ed96e-config\") pod \"machine-approver-56656f9798-vxd7m\" (UID: \"a74a90ae-daab-4183-88d5-4cb49b9ed96e\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254939 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-image-import-ca\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254957 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-config\") pod \"route-controller-manager-6576b87f9c-z4xsp\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254978 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3acaf5b1-0f37-4157-85e0-926718993903-etcd-client\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.254995 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qznrs\" (UniqueName: \"kubernetes.io/projected/01f63105-32cc-4bc3-a677-8d0a7d967af2-kube-api-access-qznrs\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255012 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/324d5ea2-a2d9-4001-8892-3aa92d6e323e-etcd-ca\") pod \"etcd-operator-b45778765-4n52t\" (UID: 
\"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255030 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2c9ef793-c9ca-4c0a-9ab0-09115c564646-images\") pod \"machine-api-operator-5694c8668f-cv44f\" (UID: \"2c9ef793-c9ca-4c0a-9ab0-09115c564646\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255045 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc82m\" (UniqueName: \"kubernetes.io/projected/2c9ef793-c9ca-4c0a-9ab0-09115c564646-kube-api-access-jc82m\") pod \"machine-api-operator-5694c8668f-cv44f\" (UID: \"2c9ef793-c9ca-4c0a-9ab0-09115c564646\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255065 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d07af33-8811-4683-883c-6e20ef713ea4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mzj8m\" (UID: \"9d07af33-8811-4683-883c-6e20ef713ea4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255082 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-etcd-serving-ca\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255098 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01f63105-32cc-4bc3-a677-8d0a7d967af2-serving-cert\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255124 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01f63105-32cc-4bc3-a677-8d0a7d967af2-audit-dir\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255141 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c99b9a-0b66-49b1-8d55-72a6bea92059-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kq9bd\" (UID: \"a4c99b9a-0b66-49b1-8d55-72a6bea92059\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255159 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8098be9-256f-478d-a218-3e74a5ef8ca9-serving-cert\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255176 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6wp6\" (UniqueName: \"kubernetes.io/projected/9d07af33-8811-4683-883c-6e20ef713ea4-kube-api-access-p6wp6\") pod \"openshift-config-operator-7777fb866f-mzj8m\" (UID: \"9d07af33-8811-4683-883c-6e20ef713ea4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255196 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bjzg6\" (UniqueName: \"kubernetes.io/projected/b8098be9-256f-478d-a218-3e74a5ef8ca9-kube-api-access-bjzg6\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255210 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d07af33-8811-4683-883c-6e20ef713ea4-serving-cert\") pod \"openshift-config-operator-7777fb866f-mzj8m\" (UID: \"9d07af33-8811-4683-883c-6e20ef713ea4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255226 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z69p9\" (UniqueName: \"kubernetes.io/projected/84c1ec53-3121-4cd0-9665-4c8a43864032-kube-api-access-z69p9\") pod \"cluster-image-registry-operator-dc59b4c8b-4s5g2\" (UID: \"84c1ec53-3121-4cd0-9665-4c8a43864032\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255243 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/324d5ea2-a2d9-4001-8892-3aa92d6e323e-serving-cert\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255243 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8098be9-256f-478d-a218-3e74a5ef8ca9-config\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255502 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255325 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-serving-cert\") pod \"route-controller-manager-6576b87f9c-z4xsp\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255588 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53da942c-31af-4b5b-9e63-4e53147ad257-serving-cert\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255614 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a74a90ae-daab-4183-88d5-4cb49b9ed96e-machine-approver-tls\") pod \"machine-approver-56656f9798-vxd7m\" (UID: \"a74a90ae-daab-4183-88d5-4cb49b9ed96e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255643 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqhnm\" (UniqueName: \"kubernetes.io/projected/a4c99b9a-0b66-49b1-8d55-72a6bea92059-kube-api-access-xqhnm\") pod \"openshift-apiserver-operator-796bbdcf4f-kq9bd\" (UID: \"a4c99b9a-0b66-49b1-8d55-72a6bea92059\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255669 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3acaf5b1-0f37-4157-85e0-926718993903-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255692 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/01f63105-32cc-4bc3-a677-8d0a7d967af2-node-pullsecrets\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255716 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z4xm\" (UniqueName: \"kubernetes.io/projected/164e7cd8-e106-4eab-81ea-bc00c73daf2a-kube-api-access-2z4xm\") pod \"cluster-samples-operator-665b6dd947-4xrqk\" (UID: \"164e7cd8-e106-4eab-81ea-bc00c73daf2a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255740 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nclcx\" (UniqueName: \"kubernetes.io/projected/6dd34441-4294-4e90-9f2d-909c5aecdff7-kube-api-access-nclcx\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255763 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/01e50c09-6efb-44d3-8919-dcea4b9f6a72-metrics-tls\") pod 
\"dns-operator-744455d44c-qt6sq\" (UID: \"01e50c09-6efb-44d3-8919-dcea4b9f6a72\") " pod="openshift-dns-operator/dns-operator-744455d44c-qt6sq" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255796 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a74a90ae-daab-4183-88d5-4cb49b9ed96e-auth-proxy-config\") pod \"machine-approver-56656f9798-vxd7m\" (UID: \"a74a90ae-daab-4183-88d5-4cb49b9ed96e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255832 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3acaf5b1-0f37-4157-85e0-926718993903-serving-cert\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255848 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4c99b9a-0b66-49b1-8d55-72a6bea92059-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kq9bd\" (UID: \"a4c99b9a-0b66-49b1-8d55-72a6bea92059\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255897 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-oauth-config\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255914 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-trusted-ca-bundle\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255933 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/324d5ea2-a2d9-4001-8892-3aa92d6e323e-etcd-client\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255952 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-client-ca\") pod \"route-controller-manager-6576b87f9c-z4xsp\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255971 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01f63105-32cc-4bc3-a677-8d0a7d967af2-etcd-client\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.255990 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3acaf5b1-0f37-4157-85e0-926718993903-audit-policies\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.256152 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-bq54w"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.256548 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3acaf5b1-0f37-4157-85e0-926718993903-audit-policies\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.257066 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2c9ef793-c9ca-4c0a-9ab0-09115c564646-images\") pod \"machine-api-operator-5694c8668f-cv44f\" (UID: \"2c9ef793-c9ca-4c0a-9ab0-09115c564646\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.257617 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-client-ca\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.257659 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74a90ae-daab-4183-88d5-4cb49b9ed96e-config\") pod \"machine-approver-56656f9798-vxd7m\" (UID: \"a74a90ae-daab-4183-88d5-4cb49b9ed96e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.258635 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-audit\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " 
pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.259794 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-etcd-serving-ca\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.261700 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-image-import-ca\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.262223 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3acaf5b1-0f37-4157-85e0-926718993903-etcd-client\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.263083 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-config\") pod \"route-controller-manager-6576b87f9c-z4xsp\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.263643 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4n52t"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.269521 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.269542 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.269554 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9v2kn"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.265835 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c99b9a-0b66-49b1-8d55-72a6bea92059-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kq9bd\" (UID: \"a4c99b9a-0b66-49b1-8d55-72a6bea92059\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.266265 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/324d5ea2-a2d9-4001-8892-3aa92d6e323e-config\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.266398 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8098be9-256f-478d-a218-3e74a5ef8ca9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.267314 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-oauth-serving-cert\") pod \"console-f9d7485db-p6ck2\" (UID: 
\"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.263777 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/164e7cd8-e106-4eab-81ea-bc00c73daf2a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4xrqk\" (UID: \"164e7cd8-e106-4eab-81ea-bc00c73daf2a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.263812 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3acaf5b1-0f37-4157-85e0-926718993903-audit-dir\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.269645 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.270615 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.265738 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/324d5ea2-a2d9-4001-8892-3aa92d6e323e-etcd-ca\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.271121 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.272740 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.272937 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-service-ca\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.273012 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01f63105-32cc-4bc3-a677-8d0a7d967af2-encryption-config\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.273485 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a74a90ae-daab-4183-88d5-4cb49b9ed96e-machine-approver-tls\") pod \"machine-approver-56656f9798-vxd7m\" (UID: 
\"a74a90ae-daab-4183-88d5-4cb49b9ed96e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.274018 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8098be9-256f-478d-a218-3e74a5ef8ca9-serving-cert\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.274170 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.274853 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a74a90ae-daab-4183-88d5-4cb49b9ed96e-auth-proxy-config\") pod \"machine-approver-56656f9798-vxd7m\" (UID: \"a74a90ae-daab-4183-88d5-4cb49b9ed96e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.275299 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3acaf5b1-0f37-4157-85e0-926718993903-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.275354 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/01f63105-32cc-4bc3-a677-8d0a7d967af2-node-pullsecrets\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 
16:14:15.278584 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8098be9-256f-478d-a218-3e74a5ef8ca9-service-ca-bundle\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.279273 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c9ef793-c9ca-4c0a-9ab0-09115c564646-config\") pod \"machine-api-operator-5694c8668f-cv44f\" (UID: \"2c9ef793-c9ca-4c0a-9ab0-09115c564646\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.280189 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.281328 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84c1ec53-3121-4cd0-9665-4c8a43864032-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4s5g2\" (UID: \"84c1ec53-3121-4cd0-9665-4c8a43864032\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.282601 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-config\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" 
Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.282726 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-serving-cert\") pod \"route-controller-manager-6576b87f9c-z4xsp\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.282812 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01f63105-32cc-4bc3-a677-8d0a7d967af2-audit-dir\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.284199 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d07af33-8811-4683-883c-6e20ef713ea4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mzj8m\" (UID: \"9d07af33-8811-4683-883c-6e20ef713ea4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.284783 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.284887 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-config\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " 
pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.285484 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-client-ca\") pod \"route-controller-manager-6576b87f9c-z4xsp\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.286302 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3acaf5b1-0f37-4157-85e0-926718993903-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.286566 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-trusted-ca-bundle\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.287882 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f63105-32cc-4bc3-a677-8d0a7d967af2-config\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.288286 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.288793 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/324d5ea2-a2d9-4001-8892-3aa92d6e323e-etcd-service-ca\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.291138 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.291505 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53da942c-31af-4b5b-9e63-4e53147ad257-serving-cert\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.291952 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d07af33-8811-4683-883c-6e20ef713ea4-serving-cert\") pod \"openshift-config-operator-7777fb866f-mzj8m\" (UID: \"9d07af33-8811-4683-883c-6e20ef713ea4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.292128 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2c9ef793-c9ca-4c0a-9ab0-09115c564646-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cv44f\" (UID: \"2c9ef793-c9ca-4c0a-9ab0-09115c564646\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.292455 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.293009 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4c99b9a-0b66-49b1-8d55-72a6bea92059-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kq9bd\" (UID: \"a4c99b9a-0b66-49b1-8d55-72a6bea92059\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.293074 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/324d5ea2-a2d9-4001-8892-3aa92d6e323e-serving-cert\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.296099 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wb4kv"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.306104 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-oauth-config\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.306522 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/01e50c09-6efb-44d3-8919-dcea4b9f6a72-metrics-tls\") pod \"dns-operator-744455d44c-qt6sq\" (UID: \"01e50c09-6efb-44d3-8919-dcea4b9f6a72\") " pod="openshift-dns-operator/dns-operator-744455d44c-qt6sq" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.307300 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01f63105-32cc-4bc3-a677-8d0a7d967af2-serving-cert\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 
16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.307544 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/84c1ec53-3121-4cd0-9665-4c8a43864032-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4s5g2\" (UID: \"84c1ec53-3121-4cd0-9665-4c8a43864032\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.307739 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3acaf5b1-0f37-4157-85e0-926718993903-serving-cert\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.307785 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/324d5ea2-a2d9-4001-8892-3aa92d6e323e-etcd-client\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.308037 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-serving-cert\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.311262 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jjgzj"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.311395 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-p6ck2"] Jan 29 16:14:15 crc 
kubenswrapper[4895]: I0129 16:14:15.311690 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wb4kv" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.312124 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3acaf5b1-0f37-4157-85e0-926718993903-encryption-config\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.313601 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01f63105-32cc-4bc3-a677-8d0a7d967af2-etcd-client\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.317741 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.319467 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.328978 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.329316 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.330744 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx"] Jan 29 16:14:15 crc kubenswrapper[4895]: 
I0129 16:14:15.333091 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.335442 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.337903 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.339254 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xc67m"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.341644 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pmkl9"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.343367 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.345531 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qt6sq"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.346386 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.347661 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.348343 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.350424 4895 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.352498 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wb4kv"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.353989 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.355078 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j8jdx"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.365662 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.366990 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pcdgm"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.368327 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5tcbt"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.368813 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.369429 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-22gbx"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.369555 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5tcbt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.370437 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5tcbt"] Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.370493 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-22gbx" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.389800 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.409122 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.432566 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.449078 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.469799 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.489315 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.508440 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.529089 4895 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.549021 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.569555 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.589202 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.609733 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.629746 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.648144 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.676628 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.690060 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.709260 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.729996 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.749834 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.769693 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.790973 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.809632 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.829695 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.850175 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.870256 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.890465 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.917768 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.930089 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.949613 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.969909 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 16:14:15 crc kubenswrapper[4895]: I0129 16:14:15.990404 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.009974 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.030348 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.050604 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.069934 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.089615 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.109773 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.132342 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.149017 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.167521 4895 request.go:700] Waited for 1.017632383s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.170048 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.189711 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.209743 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.229750 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.250755 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.270707 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.289016 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.309647 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.330448 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.370834 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.390290 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.409055 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.429071 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.449683 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.469397 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.489685 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.509762 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.537083 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.549025 4895 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.571168 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.589820 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.609955 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.629665 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.649338 4895 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.670824 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.690399 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.751668 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6298c\" (UniqueName: \"kubernetes.io/projected/324d5ea2-a2d9-4001-8892-3aa92d6e323e-kube-api-access-6298c\") pod \"etcd-operator-b45778765-4n52t\" (UID: \"324d5ea2-a2d9-4001-8892-3aa92d6e323e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.772725 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6px6\" (UniqueName: 
\"kubernetes.io/projected/3acaf5b1-0f37-4157-85e0-926718993903-kube-api-access-c6px6\") pod \"apiserver-7bbb656c7d-fjk5z\" (UID: \"3acaf5b1-0f37-4157-85e0-926718993903\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.792762 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84c1ec53-3121-4cd0-9665-4c8a43864032-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4s5g2\" (UID: \"84c1ec53-3121-4cd0-9665-4c8a43864032\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.812904 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz89h\" (UniqueName: \"kubernetes.io/projected/53da942c-31af-4b5b-9e63-4e53147ad257-kube-api-access-jz89h\") pod \"controller-manager-879f6c89f-r8fhh\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.829189 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qfgc\" (UniqueName: \"kubernetes.io/projected/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-kube-api-access-4qfgc\") pod \"route-controller-manager-6576b87f9c-z4xsp\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.831854 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.846634 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4jdh\" (UniqueName: \"kubernetes.io/projected/01e50c09-6efb-44d3-8919-dcea4b9f6a72-kube-api-access-h4jdh\") pod \"dns-operator-744455d44c-qt6sq\" (UID: \"01e50c09-6efb-44d3-8919-dcea4b9f6a72\") " pod="openshift-dns-operator/dns-operator-744455d44c-qt6sq" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.848300 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.866837 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qznrs\" (UniqueName: \"kubernetes.io/projected/01f63105-32cc-4bc3-a677-8d0a7d967af2-kube-api-access-qznrs\") pod \"apiserver-76f77b778f-6bcff\" (UID: \"01f63105-32cc-4bc3-a677-8d0a7d967af2\") " pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.886035 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc82m\" (UniqueName: \"kubernetes.io/projected/2c9ef793-c9ca-4c0a-9ab0-09115c564646-kube-api-access-jc82m\") pod \"machine-api-operator-5694c8668f-cv44f\" (UID: \"2c9ef793-c9ca-4c0a-9ab0-09115c564646\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.912544 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6wp6\" (UniqueName: \"kubernetes.io/projected/9d07af33-8811-4683-883c-6e20ef713ea4-kube-api-access-p6wp6\") pod \"openshift-config-operator-7777fb866f-mzj8m\" (UID: \"9d07af33-8811-4683-883c-6e20ef713ea4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" Jan 29 16:14:16 
crc kubenswrapper[4895]: I0129 16:14:16.913044 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.933164 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.936661 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjzg6\" (UniqueName: \"kubernetes.io/projected/b8098be9-256f-478d-a218-3e74a5ef8ca9-kube-api-access-bjzg6\") pod \"authentication-operator-69f744f599-bq54w\" (UID: \"b8098be9-256f-478d-a218-3e74a5ef8ca9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.936956 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.950237 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.970079 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.970138 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 16:14:16 crc kubenswrapper[4895]: I0129 16:14:16.990254 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.009418 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.049149 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqhnm\" (UniqueName: \"kubernetes.io/projected/a4c99b9a-0b66-49b1-8d55-72a6bea92059-kube-api-access-xqhnm\") pod \"openshift-apiserver-operator-796bbdcf4f-kq9bd\" (UID: \"a4c99b9a-0b66-49b1-8d55-72a6bea92059\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.066130 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z4xm\" (UniqueName: \"kubernetes.io/projected/164e7cd8-e106-4eab-81ea-bc00c73daf2a-kube-api-access-2z4xm\") pod \"cluster-samples-operator-665b6dd947-4xrqk\" (UID: \"164e7cd8-e106-4eab-81ea-bc00c73daf2a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.075385 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp"] Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.079844 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qt6sq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.092647 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nclcx\" (UniqueName: \"kubernetes.io/projected/6dd34441-4294-4e90-9f2d-909c5aecdff7-kube-api-access-nclcx\") pod \"console-f9d7485db-p6ck2\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:17 crc kubenswrapper[4895]: W0129 16:14:17.104344 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e29d559_3a15_4a8f_9494_6c5d4cf4c642.slice/crio-e6385a9f789d4f55010636af2ae666e0e00bb1234ae3fd362a059bcfd4984cc6 WatchSource:0}: Error finding container e6385a9f789d4f55010636af2ae666e0e00bb1234ae3fd362a059bcfd4984cc6: Status 404 returned error can't find the container with id e6385a9f789d4f55010636af2ae666e0e00bb1234ae3fd362a059bcfd4984cc6 Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.111219 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.119560 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.122464 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z69p9\" (UniqueName: \"kubernetes.io/projected/84c1ec53-3121-4cd0-9665-4c8a43864032-kube-api-access-z69p9\") pod \"cluster-image-registry-operator-dc59b4c8b-4s5g2\" (UID: \"84c1ec53-3121-4cd0-9665-4c8a43864032\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.129673 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.130265 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.139358 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.146016 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.166760 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.172443 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlrvc\" (UniqueName: \"kubernetes.io/projected/9dda0f03-a7e0-442d-b684-9b6b5a1885ab-kube-api-access-tlrvc\") pod \"downloads-7954f5f757-t675c\" (UID: \"9dda0f03-a7e0-442d-b684-9b6b5a1885ab\") " pod="openshift-console/downloads-7954f5f757-t675c" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.187661 4895 request.go:700] Waited for 1.874958507s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.192558 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.200124 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbzhl\" (UniqueName: 
\"kubernetes.io/projected/a74a90ae-daab-4183-88d5-4cb49b9ed96e-kube-api-access-fbzhl\") pod \"machine-approver-56656f9798-vxd7m\" (UID: \"a74a90ae-daab-4183-88d5-4cb49b9ed96e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.215230 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.229727 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r8fhh"] Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.232550 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.247489 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z"] Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.248460 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cv44f"] Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.249419 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 16:14:17 crc kubenswrapper[4895]: W0129 16:14:17.264303 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53da942c_31af_4b5b_9e63_4e53147ad257.slice/crio-2f8412e439ff1e93a65449219a0d4b00122bb2cd151d75b763d7bccda725ab56 WatchSource:0}: Error finding container 2f8412e439ff1e93a65449219a0d4b00122bb2cd151d75b763d7bccda725ab56: Status 404 returned error can't find the container with id 2f8412e439ff1e93a65449219a0d4b00122bb2cd151d75b763d7bccda725ab56 Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.268756 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.277669 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.291757 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.292848 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.310624 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.318649 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t675c" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.331098 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.354040 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.357977 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.369832 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.365883 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4n52t"] Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.400483 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.400532 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06e1dd20-1df7-45ad-805b-64235d3521d7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-f945r\" (UID: \"06e1dd20-1df7-45ad-805b-64235d3521d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.400583 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4761e6d8-f523-4c75-bfdb-76a523dfbede-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqxcc\" (UID: \"4761e6d8-f523-4c75-bfdb-76a523dfbede\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.400608 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4761e6d8-f523-4c75-bfdb-76a523dfbede-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqxcc\" (UID: \"4761e6d8-f523-4c75-bfdb-76a523dfbede\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.400649 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-registry-tls\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.400687 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d3c04e5-aec8-4352-9eae-ab6e388931eb-proxy-tls\") pod \"machine-config-controller-84d6567774-7sbh6\" (UID: \"3d3c04e5-aec8-4352-9eae-ab6e388931eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.400742 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3d3c04e5-aec8-4352-9eae-ab6e388931eb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7sbh6\" (UID: \"3d3c04e5-aec8-4352-9eae-ab6e388931eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.400830 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-bound-sa-token\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.400896 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06e1dd20-1df7-45ad-805b-64235d3521d7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-f945r\" (UID: \"06e1dd20-1df7-45ad-805b-64235d3521d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.404507 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w6wp\" (UniqueName: \"kubernetes.io/projected/ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98-kube-api-access-6w6wp\") pod \"package-server-manager-789f6589d5-c7csh\" (UID: \"ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.404769 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-audit-dir\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.404821 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/7d2b65e1-1f81-46fd-8cbf-bf1797d8852d-metrics-tls\") pod \"ingress-operator-5b745b69d9-bmmgq\" (UID: \"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.404847 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.404968 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d2b65e1-1f81-46fd-8cbf-bf1797d8852d-trusted-ca\") pod \"ingress-operator-5b745b69d9-bmmgq\" (UID: \"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405052 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405103 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-audit-policies\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" 
Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405125 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405172 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405198 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-trusted-ca\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405247 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405274 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9414a16a-b347-4f0e-af41-9ff94ea7cc05-serving-cert\") pod \"console-operator-58897d9998-snck7\" (UID: \"9414a16a-b347-4f0e-af41-9ff94ea7cc05\") " pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405299 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6sc6\" (UniqueName: \"kubernetes.io/projected/3d3c04e5-aec8-4352-9eae-ab6e388931eb-kube-api-access-q6sc6\") pod \"machine-config-controller-84d6567774-7sbh6\" (UID: \"3d3c04e5-aec8-4352-9eae-ab6e388931eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405383 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4761e6d8-f523-4c75-bfdb-76a523dfbede-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqxcc\" (UID: \"4761e6d8-f523-4c75-bfdb-76a523dfbede\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405486 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a22956a0-592e-4592-9237-6c42acfa8b56-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r7srz\" (UID: \"a22956a0-592e-4592-9237-6c42acfa8b56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405512 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2pkk\" (UniqueName: \"kubernetes.io/projected/85eba540-f075-4ac1-9667-f462b08ebefa-kube-api-access-n2pkk\") pod \"migrator-59844c95c7-g9q58\" (UID: 
\"85eba540-f075-4ac1-9667-f462b08ebefa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g9q58" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405604 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9414a16a-b347-4f0e-af41-9ff94ea7cc05-config\") pod \"console-operator-58897d9998-snck7\" (UID: \"9414a16a-b347-4f0e-af41-9ff94ea7cc05\") " pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405631 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9414a16a-b347-4f0e-af41-9ff94ea7cc05-trusted-ca\") pod \"console-operator-58897d9998-snck7\" (UID: \"9414a16a-b347-4f0e-af41-9ff94ea7cc05\") " pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405672 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.405696 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d2b65e1-1f81-46fd-8cbf-bf1797d8852d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bmmgq\" (UID: \"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.409241 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-p7bcx\" (UniqueName: \"kubernetes.io/projected/dced6426-2e21-4397-9074-92e06a03409a-kube-api-access-p7bcx\") pod \"openshift-controller-manager-operator-756b6f6bc6-fcftn\" (UID: \"dced6426-2e21-4397-9074-92e06a03409a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.409710 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c7csh\" (UID: \"ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410012 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410075 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bccbee59-7684-41ba-900e-a9e5bf164b80-srv-cert\") pod \"catalog-operator-68c6474976-8xfpj\" (UID: \"bccbee59-7684-41ba-900e-a9e5bf164b80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410412 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-registry-certificates\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410452 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bccbee59-7684-41ba-900e-a9e5bf164b80-profile-collector-cert\") pod \"catalog-operator-68c6474976-8xfpj\" (UID: \"bccbee59-7684-41ba-900e-a9e5bf164b80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410505 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a22956a0-592e-4592-9237-6c42acfa8b56-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r7srz\" (UID: \"a22956a0-592e-4592-9237-6c42acfa8b56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410524 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e1dd20-1df7-45ad-805b-64235d3521d7-config\") pod \"kube-controller-manager-operator-78b949d7b-f945r\" (UID: \"06e1dd20-1df7-45ad-805b-64235d3521d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410571 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: 
\"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410609 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4tp\" (UniqueName: \"kubernetes.io/projected/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-kube-api-access-ct4tp\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410628 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndq8w\" (UniqueName: \"kubernetes.io/projected/7d2b65e1-1f81-46fd-8cbf-bf1797d8852d-kube-api-access-ndq8w\") pod \"ingress-operator-5b745b69d9-bmmgq\" (UID: \"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410656 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a22956a0-592e-4592-9237-6c42acfa8b56-config\") pod \"kube-apiserver-operator-766d6c64bb-r7srz\" (UID: \"a22956a0-592e-4592-9237-6c42acfa8b56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410731 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvx9j\" (UniqueName: \"kubernetes.io/projected/9414a16a-b347-4f0e-af41-9ff94ea7cc05-kube-api-access-zvx9j\") pod \"console-operator-58897d9998-snck7\" (UID: \"9414a16a-b347-4f0e-af41-9ff94ea7cc05\") " pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410751 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410770 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410827 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dced6426-2e21-4397-9074-92e06a03409a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fcftn\" (UID: \"dced6426-2e21-4397-9074-92e06a03409a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410878 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.410898 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js4b9\" (UniqueName: 
\"kubernetes.io/projected/bccbee59-7684-41ba-900e-a9e5bf164b80-kube-api-access-js4b9\") pod \"catalog-operator-68c6474976-8xfpj\" (UID: \"bccbee59-7684-41ba-900e-a9e5bf164b80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.412172 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.412266 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dced6426-2e21-4397-9074-92e06a03409a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fcftn\" (UID: \"dced6426-2e21-4397-9074-92e06a03409a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.412334 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.412485 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5hlk\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-kube-api-access-l5hlk\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: 
\"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: E0129 16:14:17.412523 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:17.912508467 +0000 UTC m=+141.715485731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.437595 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m"] Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.513642 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.513915 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: 
I0129 16:14:17.513952 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76pr9\" (UniqueName: \"kubernetes.io/projected/c00dfac3-8de9-4673-a6b4-2965a204accb-kube-api-access-76pr9\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.513972 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl2gb\" (UniqueName: \"kubernetes.io/projected/8d5c56cc-9bff-4531-b57f-6a14c1d0802a-kube-api-access-rl2gb\") pod \"multus-admission-controller-857f4d67dd-jjgzj\" (UID: \"8d5c56cc-9bff-4531-b57f-6a14c1d0802a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jjgzj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.513996 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514017 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-trusted-ca\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514036 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514053 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9414a16a-b347-4f0e-af41-9ff94ea7cc05-serving-cert\") pod \"console-operator-58897d9998-snck7\" (UID: \"9414a16a-b347-4f0e-af41-9ff94ea7cc05\") " pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514075 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6sc6\" (UniqueName: \"kubernetes.io/projected/3d3c04e5-aec8-4352-9eae-ab6e388931eb-kube-api-access-q6sc6\") pod \"machine-config-controller-84d6567774-7sbh6\" (UID: \"3d3c04e5-aec8-4352-9eae-ab6e388931eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514121 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cldws\" (UniqueName: \"kubernetes.io/projected/46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a-kube-api-access-cldws\") pod \"kube-storage-version-migrator-operator-b67b599dd-w4jl6\" (UID: \"46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514153 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72cc9605-664e-4897-93c5-b386c20517d1-metrics-tls\") pod \"dns-default-wb4kv\" (UID: \"72cc9605-664e-4897-93c5-b386c20517d1\") " pod="openshift-dns/dns-default-wb4kv" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514182 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d08826f7-331b-4ed8-98c2-2faaab99d384-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sx29z\" (UID: \"d08826f7-331b-4ed8-98c2-2faaab99d384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514206 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d705d04a-92ce-41c8-91f8-c035a8dfa98a-apiservice-cert\") pod \"packageserver-d55dfcdfc-nsjmx\" (UID: \"d705d04a-92ce-41c8-91f8-c035a8dfa98a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514222 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6fe9b00e-af32-4089-8a31-d0c1735d001f-certs\") pod \"machine-config-server-22gbx\" (UID: \"6fe9b00e-af32-4089-8a31-d0c1735d001f\") " pod="openshift-machine-config-operator/machine-config-server-22gbx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514241 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4761e6d8-f523-4c75-bfdb-76a523dfbede-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqxcc\" (UID: \"4761e6d8-f523-4c75-bfdb-76a523dfbede\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514259 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c39ec96-c5ce-40f8-80b5-68baacd59516-stats-auth\") pod \"router-default-5444994796-s5x8l\" (UID: 
\"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514278 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z7kz\" (UniqueName: \"kubernetes.io/projected/d08826f7-331b-4ed8-98c2-2faaab99d384-kube-api-access-6z7kz\") pod \"olm-operator-6b444d44fb-sx29z\" (UID: \"d08826f7-331b-4ed8-98c2-2faaab99d384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514295 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/249177f9-7b1e-4d39-a400-e625862f53c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j8jdx\" (UID: \"249177f9-7b1e-4d39-a400-e625862f53c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514312 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgxw5\" (UniqueName: \"kubernetes.io/projected/249177f9-7b1e-4d39-a400-e625862f53c3-kube-api-access-kgxw5\") pod \"marketplace-operator-79b997595-j8jdx\" (UID: \"249177f9-7b1e-4d39-a400-e625862f53c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514329 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-plugins-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514350 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a22956a0-592e-4592-9237-6c42acfa8b56-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r7srz\" (UID: \"a22956a0-592e-4592-9237-6c42acfa8b56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514370 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2pkk\" (UniqueName: \"kubernetes.io/projected/85eba540-f075-4ac1-9667-f462b08ebefa-kube-api-access-n2pkk\") pod \"migrator-59844c95c7-g9q58\" (UID: \"85eba540-f075-4ac1-9667-f462b08ebefa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g9q58" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514388 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/249177f9-7b1e-4d39-a400-e625862f53c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j8jdx\" (UID: \"249177f9-7b1e-4d39-a400-e625862f53c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c39ec96-c5ce-40f8-80b5-68baacd59516-default-certificate\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514423 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c39ec96-c5ce-40f8-80b5-68baacd59516-service-ca-bundle\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " 
pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514441 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjlmr\" (UniqueName: \"kubernetes.io/projected/f2305b0f-2d33-4c71-a687-0533fe97d02d-kube-api-access-hjlmr\") pod \"service-ca-9c57cc56f-pcdgm\" (UID: \"f2305b0f-2d33-4c71-a687-0533fe97d02d\") " pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514460 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b221011-ac95-4cb7-8374-a7b61f91ce72-config\") pod \"service-ca-operator-777779d784-n5vw7\" (UID: \"2b221011-ac95-4cb7-8374-a7b61f91ce72\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514480 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9414a16a-b347-4f0e-af41-9ff94ea7cc05-config\") pod \"console-operator-58897d9998-snck7\" (UID: \"9414a16a-b347-4f0e-af41-9ff94ea7cc05\") " pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514501 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9414a16a-b347-4f0e-af41-9ff94ea7cc05-trusted-ca\") pod \"console-operator-58897d9998-snck7\" (UID: \"9414a16a-b347-4f0e-af41-9ff94ea7cc05\") " pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514525 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-socket-dir\") pod 
\"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514549 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514571 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d2b65e1-1f81-46fd-8cbf-bf1797d8852d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bmmgq\" (UID: \"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514596 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7bcx\" (UniqueName: \"kubernetes.io/projected/dced6426-2e21-4397-9074-92e06a03409a-kube-api-access-p7bcx\") pod \"openshift-controller-manager-operator-756b6f6bc6-fcftn\" (UID: \"dced6426-2e21-4397-9074-92e06a03409a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514616 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-registration-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514633 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-csi-data-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514652 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c7csh\" (UID: \"ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514671 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c39ec96-c5ce-40f8-80b5-68baacd59516-metrics-certs\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514688 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzm4n\" (UniqueName: \"kubernetes.io/projected/c2b62e00-db4e-4ce9-8dd1-717159043f83-kube-api-access-pzm4n\") pod \"control-plane-machine-set-operator-78cbb6b69f-dp6dw\" (UID: \"c2b62e00-db4e-4ce9-8dd1-717159043f83\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514705 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514725 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bccbee59-7684-41ba-900e-a9e5bf164b80-srv-cert\") pod \"catalog-operator-68c6474976-8xfpj\" (UID: \"bccbee59-7684-41ba-900e-a9e5bf164b80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514744 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/253b52d8-4b46-4b35-a97d-8dd6f1a00cb0-cert\") pod \"ingress-canary-5tcbt\" (UID: \"253b52d8-4b46-4b35-a97d-8dd6f1a00cb0\") " pod="openshift-ingress-canary/ingress-canary-5tcbt" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514761 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-registry-certificates\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514778 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bccbee59-7684-41ba-900e-a9e5bf164b80-profile-collector-cert\") pod \"catalog-operator-68c6474976-8xfpj\" (UID: \"bccbee59-7684-41ba-900e-a9e5bf164b80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514803 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a22956a0-592e-4592-9237-6c42acfa8b56-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r7srz\" (UID: \"a22956a0-592e-4592-9237-6c42acfa8b56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514819 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e1dd20-1df7-45ad-805b-64235d3521d7-config\") pod \"kube-controller-manager-operator-78b949d7b-f945r\" (UID: \"06e1dd20-1df7-45ad-805b-64235d3521d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514839 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6fe9b00e-af32-4089-8a31-d0c1735d001f-node-bootstrap-token\") pod \"machine-config-server-22gbx\" (UID: \"6fe9b00e-af32-4089-8a31-d0c1735d001f\") " pod="openshift-machine-config-operator/machine-config-server-22gbx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514911 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96f8638c-32af-4ed6-87de-790aa042e15c-proxy-tls\") pod \"machine-config-operator-74547568cd-dqlvl\" (UID: \"96f8638c-32af-4ed6-87de-790aa042e15c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514933 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514950 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4tp\" (UniqueName: \"kubernetes.io/projected/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-kube-api-access-ct4tp\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514966 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndq8w\" (UniqueName: \"kubernetes.io/projected/7d2b65e1-1f81-46fd-8cbf-bf1797d8852d-kube-api-access-ndq8w\") pod \"ingress-operator-5b745b69d9-bmmgq\" (UID: \"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.514986 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grgq8\" (UniqueName: \"kubernetes.io/projected/253b52d8-4b46-4b35-a97d-8dd6f1a00cb0-kube-api-access-grgq8\") pod \"ingress-canary-5tcbt\" (UID: \"253b52d8-4b46-4b35-a97d-8dd6f1a00cb0\") " pod="openshift-ingress-canary/ingress-canary-5tcbt" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515006 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a22956a0-592e-4592-9237-6c42acfa8b56-config\") pod \"kube-apiserver-operator-766d6c64bb-r7srz\" (UID: \"a22956a0-592e-4592-9237-6c42acfa8b56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515025 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvx9j\" (UniqueName: 
\"kubernetes.io/projected/9414a16a-b347-4f0e-af41-9ff94ea7cc05-kube-api-access-zvx9j\") pod \"console-operator-58897d9998-snck7\" (UID: \"9414a16a-b347-4f0e-af41-9ff94ea7cc05\") " pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515045 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da56ae41-00cb-4345-a6be-1ceb542b8afe-config-volume\") pod \"collect-profiles-29495040-q8q6g\" (UID: \"da56ae41-00cb-4345-a6be-1ceb542b8afe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515063 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da56ae41-00cb-4345-a6be-1ceb542b8afe-secret-volume\") pod \"collect-profiles-29495040-q8q6g\" (UID: \"da56ae41-00cb-4345-a6be-1ceb542b8afe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515080 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515100 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515116 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dced6426-2e21-4397-9074-92e06a03409a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fcftn\" (UID: \"dced6426-2e21-4397-9074-92e06a03409a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515144 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js4b9\" (UniqueName: \"kubernetes.io/projected/bccbee59-7684-41ba-900e-a9e5bf164b80-kube-api-access-js4b9\") pod \"catalog-operator-68c6474976-8xfpj\" (UID: \"bccbee59-7684-41ba-900e-a9e5bf164b80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515161 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sgfq\" (UniqueName: \"kubernetes.io/projected/2b221011-ac95-4cb7-8374-a7b61f91ce72-kube-api-access-7sgfq\") pod \"service-ca-operator-777779d784-n5vw7\" (UID: \"2b221011-ac95-4cb7-8374-a7b61f91ce72\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515179 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515198 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrdkd\" (UniqueName: \"kubernetes.io/projected/da56ae41-00cb-4345-a6be-1ceb542b8afe-kube-api-access-nrdkd\") pod \"collect-profiles-29495040-q8q6g\" (UID: \"da56ae41-00cb-4345-a6be-1ceb542b8afe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515213 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d705d04a-92ce-41c8-91f8-c035a8dfa98a-tmpfs\") pod \"packageserver-d55dfcdfc-nsjmx\" (UID: \"d705d04a-92ce-41c8-91f8-c035a8dfa98a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515232 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dced6426-2e21-4397-9074-92e06a03409a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fcftn\" (UID: \"dced6426-2e21-4397-9074-92e06a03409a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515266 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-mountpoint-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515283 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: 
\"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515302 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5hlk\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-kube-api-access-l5hlk\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515318 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/96f8638c-32af-4ed6-87de-790aa042e15c-images\") pod \"machine-config-operator-74547568cd-dqlvl\" (UID: \"96f8638c-32af-4ed6-87de-790aa042e15c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515333 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96f8638c-32af-4ed6-87de-790aa042e15c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dqlvl\" (UID: \"96f8638c-32af-4ed6-87de-790aa042e15c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515352 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d08826f7-331b-4ed8-98c2-2faaab99d384-srv-cert\") pod \"olm-operator-6b444d44fb-sx29z\" (UID: \"d08826f7-331b-4ed8-98c2-2faaab99d384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515369 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/06e1dd20-1df7-45ad-805b-64235d3521d7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-f945r\" (UID: \"06e1dd20-1df7-45ad-805b-64235d3521d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515401 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515418 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4761e6d8-f523-4c75-bfdb-76a523dfbede-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqxcc\" (UID: \"4761e6d8-f523-4c75-bfdb-76a523dfbede\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515435 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4761e6d8-f523-4c75-bfdb-76a523dfbede-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqxcc\" (UID: \"4761e6d8-f523-4c75-bfdb-76a523dfbede\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515453 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f2305b0f-2d33-4c71-a687-0533fe97d02d-signing-key\") pod \"service-ca-9c57cc56f-pcdgm\" (UID: 
\"f2305b0f-2d33-4c71-a687-0533fe97d02d\") " pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515471 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-registry-tls\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515488 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d3c04e5-aec8-4352-9eae-ab6e388931eb-proxy-tls\") pod \"machine-config-controller-84d6567774-7sbh6\" (UID: \"3d3c04e5-aec8-4352-9eae-ab6e388931eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515505 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3d3c04e5-aec8-4352-9eae-ab6e388931eb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7sbh6\" (UID: \"3d3c04e5-aec8-4352-9eae-ab6e388931eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515521 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjqwr\" (UniqueName: \"kubernetes.io/projected/72cc9605-664e-4897-93c5-b386c20517d1-kube-api-access-fjqwr\") pod \"dns-default-wb4kv\" (UID: \"72cc9605-664e-4897-93c5-b386c20517d1\") " pod="openshift-dns/dns-default-wb4kv" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515539 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp7dn\" (UniqueName: 
\"kubernetes.io/projected/9c39ec96-c5ce-40f8-80b5-68baacd59516-kube-api-access-sp7dn\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515554 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zmf\" (UniqueName: \"kubernetes.io/projected/6fe9b00e-af32-4089-8a31-d0c1735d001f-kube-api-access-48zmf\") pod \"machine-config-server-22gbx\" (UID: \"6fe9b00e-af32-4089-8a31-d0c1735d001f\") " pod="openshift-machine-config-operator/machine-config-server-22gbx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515582 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-bound-sa-token\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515600 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czhkh\" (UniqueName: \"kubernetes.io/projected/96f8638c-32af-4ed6-87de-790aa042e15c-kube-api-access-czhkh\") pod \"machine-config-operator-74547568cd-dqlvl\" (UID: \"96f8638c-32af-4ed6-87de-790aa042e15c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515620 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06e1dd20-1df7-45ad-805b-64235d3521d7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-f945r\" (UID: \"06e1dd20-1df7-45ad-805b-64235d3521d7\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515636 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f2305b0f-2d33-4c71-a687-0533fe97d02d-signing-cabundle\") pod \"service-ca-9c57cc56f-pcdgm\" (UID: \"f2305b0f-2d33-4c71-a687-0533fe97d02d\") " pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515655 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w6wp\" (UniqueName: \"kubernetes.io/projected/ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98-kube-api-access-6w6wp\") pod \"package-server-manager-789f6589d5-c7csh\" (UID: \"ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515674 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d705d04a-92ce-41c8-91f8-c035a8dfa98a-webhook-cert\") pod \"packageserver-d55dfcdfc-nsjmx\" (UID: \"d705d04a-92ce-41c8-91f8-c035a8dfa98a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515702 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-audit-dir\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515718 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2b221011-ac95-4cb7-8374-a7b61f91ce72-serving-cert\") pod \"service-ca-operator-777779d784-n5vw7\" (UID: \"2b221011-ac95-4cb7-8374-a7b61f91ce72\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515733 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-w4jl6\" (UID: \"46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515750 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d5c56cc-9bff-4531-b57f-6a14c1d0802a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jjgzj\" (UID: \"8d5c56cc-9bff-4531-b57f-6a14c1d0802a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jjgzj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515766 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72cc9605-664e-4897-93c5-b386c20517d1-config-volume\") pod \"dns-default-wb4kv\" (UID: \"72cc9605-664e-4897-93c5-b386c20517d1\") " pod="openshift-dns/dns-default-wb4kv" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515786 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515804 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d2b65e1-1f81-46fd-8cbf-bf1797d8852d-metrics-tls\") pod \"ingress-operator-5b745b69d9-bmmgq\" (UID: \"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515822 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d2b65e1-1f81-46fd-8cbf-bf1797d8852d-trusted-ca\") pod \"ingress-operator-5b745b69d9-bmmgq\" (UID: \"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515839 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515855 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-w4jl6\" (UID: \"46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515888 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/c2b62e00-db4e-4ce9-8dd1-717159043f83-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dp6dw\" (UID: \"c2b62e00-db4e-4ce9-8dd1-717159043f83\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515911 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhp2m\" (UniqueName: \"kubernetes.io/projected/d705d04a-92ce-41c8-91f8-c035a8dfa98a-kube-api-access-vhp2m\") pod \"packageserver-d55dfcdfc-nsjmx\" (UID: \"d705d04a-92ce-41c8-91f8-c035a8dfa98a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515916 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.515939 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-audit-policies\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: E0129 16:14:17.516034 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:18.016015083 +0000 UTC m=+141.818992347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.518611 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dced6426-2e21-4397-9074-92e06a03409a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fcftn\" (UID: \"dced6426-2e21-4397-9074-92e06a03409a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.520838 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.521832 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-trusted-ca\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.522577 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bccbee59-7684-41ba-900e-a9e5bf164b80-srv-cert\") pod \"catalog-operator-68c6474976-8xfpj\" (UID: 
\"bccbee59-7684-41ba-900e-a9e5bf164b80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.527130 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.527227 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-registry-certificates\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.527926 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9414a16a-b347-4f0e-af41-9ff94ea7cc05-serving-cert\") pod \"console-operator-58897d9998-snck7\" (UID: \"9414a16a-b347-4f0e-af41-9ff94ea7cc05\") " pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.528184 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dced6426-2e21-4397-9074-92e06a03409a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fcftn\" (UID: \"dced6426-2e21-4397-9074-92e06a03409a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.528469 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-audit-dir\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.528546 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06e1dd20-1df7-45ad-805b-64235d3521d7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-f945r\" (UID: \"06e1dd20-1df7-45ad-805b-64235d3521d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.529269 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bccbee59-7684-41ba-900e-a9e5bf164b80-profile-collector-cert\") pod \"catalog-operator-68c6474976-8xfpj\" (UID: \"bccbee59-7684-41ba-900e-a9e5bf164b80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.530037 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a22956a0-592e-4592-9237-6c42acfa8b56-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r7srz\" (UID: \"a22956a0-592e-4592-9237-6c42acfa8b56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.531665 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d3c04e5-aec8-4352-9eae-ab6e388931eb-proxy-tls\") pod \"machine-config-controller-84d6567774-7sbh6\" (UID: \"3d3c04e5-aec8-4352-9eae-ab6e388931eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 
16:14:17.531967 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.532072 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.532483 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e1dd20-1df7-45ad-805b-64235d3521d7-config\") pod \"kube-controller-manager-operator-78b949d7b-f945r\" (UID: \"06e1dd20-1df7-45ad-805b-64235d3521d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.533982 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3d3c04e5-aec8-4352-9eae-ab6e388931eb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7sbh6\" (UID: \"3d3c04e5-aec8-4352-9eae-ab6e388931eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.535430 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.536394 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a22956a0-592e-4592-9237-6c42acfa8b56-config\") pod \"kube-apiserver-operator-766d6c64bb-r7srz\" (UID: \"a22956a0-592e-4592-9237-6c42acfa8b56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.536704 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.537380 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4761e6d8-f523-4c75-bfdb-76a523dfbede-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqxcc\" (UID: \"4761e6d8-f523-4c75-bfdb-76a523dfbede\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.537600 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4761e6d8-f523-4c75-bfdb-76a523dfbede-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqxcc\" (UID: \"4761e6d8-f523-4c75-bfdb-76a523dfbede\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.538142 4895 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d2b65e1-1f81-46fd-8cbf-bf1797d8852d-metrics-tls\") pod \"ingress-operator-5b745b69d9-bmmgq\" (UID: \"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.539274 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d2b65e1-1f81-46fd-8cbf-bf1797d8852d-trusted-ca\") pod \"ingress-operator-5b745b69d9-bmmgq\" (UID: \"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.539453 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-audit-policies\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.540142 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9414a16a-b347-4f0e-af41-9ff94ea7cc05-config\") pod \"console-operator-58897d9998-snck7\" (UID: \"9414a16a-b347-4f0e-af41-9ff94ea7cc05\") " pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.541408 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-registry-tls\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.544339 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.544978 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.545789 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.547242 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c7csh\" (UID: \"ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.547917 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: 
\"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.555574 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.562310 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.563195 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9414a16a-b347-4f0e-af41-9ff94ea7cc05-trusted-ca\") pod \"console-operator-58897d9998-snck7\" (UID: \"9414a16a-b347-4f0e-af41-9ff94ea7cc05\") " pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.589442 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d2b65e1-1f81-46fd-8cbf-bf1797d8852d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bmmgq\" (UID: \"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.603702 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7bcx\" (UniqueName: 
\"kubernetes.io/projected/dced6426-2e21-4397-9074-92e06a03409a-kube-api-access-p7bcx\") pod \"openshift-controller-manager-operator-756b6f6bc6-fcftn\" (UID: \"dced6426-2e21-4397-9074-92e06a03409a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.609412 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5hlk\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-kube-api-access-l5hlk\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617218 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76pr9\" (UniqueName: \"kubernetes.io/projected/c00dfac3-8de9-4673-a6b4-2965a204accb-kube-api-access-76pr9\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617268 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl2gb\" (UniqueName: \"kubernetes.io/projected/8d5c56cc-9bff-4531-b57f-6a14c1d0802a-kube-api-access-rl2gb\") pod \"multus-admission-controller-857f4d67dd-jjgzj\" (UID: \"8d5c56cc-9bff-4531-b57f-6a14c1d0802a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jjgzj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617319 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cldws\" (UniqueName: \"kubernetes.io/projected/46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a-kube-api-access-cldws\") pod \"kube-storage-version-migrator-operator-b67b599dd-w4jl6\" (UID: \"46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617344 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72cc9605-664e-4897-93c5-b386c20517d1-metrics-tls\") pod \"dns-default-wb4kv\" (UID: \"72cc9605-664e-4897-93c5-b386c20517d1\") " pod="openshift-dns/dns-default-wb4kv" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617380 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d08826f7-331b-4ed8-98c2-2faaab99d384-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sx29z\" (UID: \"d08826f7-331b-4ed8-98c2-2faaab99d384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617414 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d705d04a-92ce-41c8-91f8-c035a8dfa98a-apiservice-cert\") pod \"packageserver-d55dfcdfc-nsjmx\" (UID: \"d705d04a-92ce-41c8-91f8-c035a8dfa98a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617440 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6fe9b00e-af32-4089-8a31-d0c1735d001f-certs\") pod \"machine-config-server-22gbx\" (UID: \"6fe9b00e-af32-4089-8a31-d0c1735d001f\") " pod="openshift-machine-config-operator/machine-config-server-22gbx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617467 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c39ec96-c5ce-40f8-80b5-68baacd59516-stats-auth\") pod \"router-default-5444994796-s5x8l\" 
(UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617488 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z7kz\" (UniqueName: \"kubernetes.io/projected/d08826f7-331b-4ed8-98c2-2faaab99d384-kube-api-access-6z7kz\") pod \"olm-operator-6b444d44fb-sx29z\" (UID: \"d08826f7-331b-4ed8-98c2-2faaab99d384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617512 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/249177f9-7b1e-4d39-a400-e625862f53c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j8jdx\" (UID: \"249177f9-7b1e-4d39-a400-e625862f53c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617535 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgxw5\" (UniqueName: \"kubernetes.io/projected/249177f9-7b1e-4d39-a400-e625862f53c3-kube-api-access-kgxw5\") pod \"marketplace-operator-79b997595-j8jdx\" (UID: \"249177f9-7b1e-4d39-a400-e625862f53c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617561 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-plugins-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617604 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/249177f9-7b1e-4d39-a400-e625862f53c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j8jdx\" (UID: \"249177f9-7b1e-4d39-a400-e625862f53c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617631 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c39ec96-c5ce-40f8-80b5-68baacd59516-default-certificate\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617654 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c39ec96-c5ce-40f8-80b5-68baacd59516-service-ca-bundle\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617682 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlmr\" (UniqueName: \"kubernetes.io/projected/f2305b0f-2d33-4c71-a687-0533fe97d02d-kube-api-access-hjlmr\") pod \"service-ca-9c57cc56f-pcdgm\" (UID: \"f2305b0f-2d33-4c71-a687-0533fe97d02d\") " pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617713 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b221011-ac95-4cb7-8374-a7b61f91ce72-config\") pod \"service-ca-operator-777779d784-n5vw7\" (UID: \"2b221011-ac95-4cb7-8374-a7b61f91ce72\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617739 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-socket-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617768 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-registration-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617792 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-csi-data-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617821 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c39ec96-c5ce-40f8-80b5-68baacd59516-metrics-certs\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617847 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzm4n\" (UniqueName: \"kubernetes.io/projected/c2b62e00-db4e-4ce9-8dd1-717159043f83-kube-api-access-pzm4n\") pod \"control-plane-machine-set-operator-78cbb6b69f-dp6dw\" (UID: \"c2b62e00-db4e-4ce9-8dd1-717159043f83\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617894 
4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/253b52d8-4b46-4b35-a97d-8dd6f1a00cb0-cert\") pod \"ingress-canary-5tcbt\" (UID: \"253b52d8-4b46-4b35-a97d-8dd6f1a00cb0\") " pod="openshift-ingress-canary/ingress-canary-5tcbt" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617935 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6fe9b00e-af32-4089-8a31-d0c1735d001f-node-bootstrap-token\") pod \"machine-config-server-22gbx\" (UID: \"6fe9b00e-af32-4089-8a31-d0c1735d001f\") " pod="openshift-machine-config-operator/machine-config-server-22gbx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.617981 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96f8638c-32af-4ed6-87de-790aa042e15c-proxy-tls\") pod \"machine-config-operator-74547568cd-dqlvl\" (UID: \"96f8638c-32af-4ed6-87de-790aa042e15c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618014 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grgq8\" (UniqueName: \"kubernetes.io/projected/253b52d8-4b46-4b35-a97d-8dd6f1a00cb0-kube-api-access-grgq8\") pod \"ingress-canary-5tcbt\" (UID: \"253b52d8-4b46-4b35-a97d-8dd6f1a00cb0\") " pod="openshift-ingress-canary/ingress-canary-5tcbt" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618038 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da56ae41-00cb-4345-a6be-1ceb542b8afe-config-volume\") pod \"collect-profiles-29495040-q8q6g\" (UID: \"da56ae41-00cb-4345-a6be-1ceb542b8afe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" Jan 29 16:14:17 crc 
kubenswrapper[4895]: I0129 16:14:17.618061 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da56ae41-00cb-4345-a6be-1ceb542b8afe-secret-volume\") pod \"collect-profiles-29495040-q8q6g\" (UID: \"da56ae41-00cb-4345-a6be-1ceb542b8afe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618091 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618121 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sgfq\" (UniqueName: \"kubernetes.io/projected/2b221011-ac95-4cb7-8374-a7b61f91ce72-kube-api-access-7sgfq\") pod \"service-ca-operator-777779d784-n5vw7\" (UID: \"2b221011-ac95-4cb7-8374-a7b61f91ce72\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618148 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrdkd\" (UniqueName: \"kubernetes.io/projected/da56ae41-00cb-4345-a6be-1ceb542b8afe-kube-api-access-nrdkd\") pod \"collect-profiles-29495040-q8q6g\" (UID: \"da56ae41-00cb-4345-a6be-1ceb542b8afe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618171 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d705d04a-92ce-41c8-91f8-c035a8dfa98a-tmpfs\") pod 
\"packageserver-d55dfcdfc-nsjmx\" (UID: \"d705d04a-92ce-41c8-91f8-c035a8dfa98a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618199 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-mountpoint-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618223 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/96f8638c-32af-4ed6-87de-790aa042e15c-images\") pod \"machine-config-operator-74547568cd-dqlvl\" (UID: \"96f8638c-32af-4ed6-87de-790aa042e15c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618247 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96f8638c-32af-4ed6-87de-790aa042e15c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dqlvl\" (UID: \"96f8638c-32af-4ed6-87de-790aa042e15c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618271 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d08826f7-331b-4ed8-98c2-2faaab99d384-srv-cert\") pod \"olm-operator-6b444d44fb-sx29z\" (UID: \"d08826f7-331b-4ed8-98c2-2faaab99d384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618519 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/f2305b0f-2d33-4c71-a687-0533fe97d02d-signing-key\") pod \"service-ca-9c57cc56f-pcdgm\" (UID: \"f2305b0f-2d33-4c71-a687-0533fe97d02d\") " pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618574 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjqwr\" (UniqueName: \"kubernetes.io/projected/72cc9605-664e-4897-93c5-b386c20517d1-kube-api-access-fjqwr\") pod \"dns-default-wb4kv\" (UID: \"72cc9605-664e-4897-93c5-b386c20517d1\") " pod="openshift-dns/dns-default-wb4kv" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618604 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp7dn\" (UniqueName: \"kubernetes.io/projected/9c39ec96-c5ce-40f8-80b5-68baacd59516-kube-api-access-sp7dn\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618651 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48zmf\" (UniqueName: \"kubernetes.io/projected/6fe9b00e-af32-4089-8a31-d0c1735d001f-kube-api-access-48zmf\") pod \"machine-config-server-22gbx\" (UID: \"6fe9b00e-af32-4089-8a31-d0c1735d001f\") " pod="openshift-machine-config-operator/machine-config-server-22gbx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618742 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czhkh\" (UniqueName: \"kubernetes.io/projected/96f8638c-32af-4ed6-87de-790aa042e15c-kube-api-access-czhkh\") pod \"machine-config-operator-74547568cd-dqlvl\" (UID: \"96f8638c-32af-4ed6-87de-790aa042e15c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618781 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f2305b0f-2d33-4c71-a687-0533fe97d02d-signing-cabundle\") pod \"service-ca-9c57cc56f-pcdgm\" (UID: \"f2305b0f-2d33-4c71-a687-0533fe97d02d\") " pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618831 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d705d04a-92ce-41c8-91f8-c035a8dfa98a-webhook-cert\") pod \"packageserver-d55dfcdfc-nsjmx\" (UID: \"d705d04a-92ce-41c8-91f8-c035a8dfa98a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618860 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b221011-ac95-4cb7-8374-a7b61f91ce72-serving-cert\") pod \"service-ca-operator-777779d784-n5vw7\" (UID: \"2b221011-ac95-4cb7-8374-a7b61f91ce72\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618909 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-w4jl6\" (UID: \"46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618933 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d5c56cc-9bff-4531-b57f-6a14c1d0802a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jjgzj\" (UID: \"8d5c56cc-9bff-4531-b57f-6a14c1d0802a\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-jjgzj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.618979 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72cc9605-664e-4897-93c5-b386c20517d1-config-volume\") pod \"dns-default-wb4kv\" (UID: \"72cc9605-664e-4897-93c5-b386c20517d1\") " pod="openshift-dns/dns-default-wb4kv" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.619017 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-w4jl6\" (UID: \"46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.619082 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2b62e00-db4e-4ce9-8dd1-717159043f83-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dp6dw\" (UID: \"c2b62e00-db4e-4ce9-8dd1-717159043f83\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.619111 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhp2m\" (UniqueName: \"kubernetes.io/projected/d705d04a-92ce-41c8-91f8-c035a8dfa98a-kube-api-access-vhp2m\") pod \"packageserver-d55dfcdfc-nsjmx\" (UID: \"d705d04a-92ce-41c8-91f8-c035a8dfa98a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.622145 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-csi-data-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.623247 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72cc9605-664e-4897-93c5-b386c20517d1-metrics-tls\") pod \"dns-default-wb4kv\" (UID: \"72cc9605-664e-4897-93c5-b386c20517d1\") " pod="openshift-dns/dns-default-wb4kv" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.623833 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72cc9605-664e-4897-93c5-b386c20517d1-config-volume\") pod \"dns-default-wb4kv\" (UID: \"72cc9605-664e-4897-93c5-b386c20517d1\") " pod="openshift-dns/dns-default-wb4kv" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.624220 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f2305b0f-2d33-4c71-a687-0533fe97d02d-signing-cabundle\") pod \"service-ca-9c57cc56f-pcdgm\" (UID: \"f2305b0f-2d33-4c71-a687-0533fe97d02d\") " pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.624520 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-w4jl6\" (UID: \"46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.625315 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2"] Jan 29 16:14:17 crc 
kubenswrapper[4895]: I0129 16:14:17.651778 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d5c56cc-9bff-4531-b57f-6a14c1d0802a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jjgzj\" (UID: \"8d5c56cc-9bff-4531-b57f-6a14c1d0802a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jjgzj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.653094 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/249177f9-7b1e-4d39-a400-e625862f53c3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j8jdx\" (UID: \"249177f9-7b1e-4d39-a400-e625862f53c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.653807 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d705d04a-92ce-41c8-91f8-c035a8dfa98a-apiservice-cert\") pod \"packageserver-d55dfcdfc-nsjmx\" (UID: \"d705d04a-92ce-41c8-91f8-c035a8dfa98a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.654937 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9c39ec96-c5ce-40f8-80b5-68baacd59516-metrics-certs\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.655668 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b221011-ac95-4cb7-8374-a7b61f91ce72-serving-cert\") pod \"service-ca-operator-777779d784-n5vw7\" (UID: \"2b221011-ac95-4cb7-8374-a7b61f91ce72\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.655806 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2b62e00-db4e-4ce9-8dd1-717159043f83-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dp6dw\" (UID: \"c2b62e00-db4e-4ce9-8dd1-717159043f83\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.656353 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9c39ec96-c5ce-40f8-80b5-68baacd59516-default-certificate\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.656500 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d08826f7-331b-4ed8-98c2-2faaab99d384-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sx29z\" (UID: \"d08826f7-331b-4ed8-98c2-2faaab99d384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.656507 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d705d04a-92ce-41c8-91f8-c035a8dfa98a-webhook-cert\") pod \"packageserver-d55dfcdfc-nsjmx\" (UID: \"d705d04a-92ce-41c8-91f8-c035a8dfa98a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.657229 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/d08826f7-331b-4ed8-98c2-2faaab99d384-srv-cert\") pod \"olm-operator-6b444d44fb-sx29z\" (UID: \"d08826f7-331b-4ed8-98c2-2faaab99d384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.657236 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f2305b0f-2d33-4c71-a687-0533fe97d02d-signing-key\") pod \"service-ca-9c57cc56f-pcdgm\" (UID: \"f2305b0f-2d33-4c71-a687-0533fe97d02d\") " pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.658131 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c39ec96-c5ce-40f8-80b5-68baacd59516-service-ca-bundle\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.658466 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/253b52d8-4b46-4b35-a97d-8dd6f1a00cb0-cert\") pod \"ingress-canary-5tcbt\" (UID: \"253b52d8-4b46-4b35-a97d-8dd6f1a00cb0\") " pod="openshift-ingress-canary/ingress-canary-5tcbt" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.658495 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-socket-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.658557 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-registration-dir\") 
pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.658656 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d705d04a-92ce-41c8-91f8-c035a8dfa98a-tmpfs\") pod \"packageserver-d55dfcdfc-nsjmx\" (UID: \"d705d04a-92ce-41c8-91f8-c035a8dfa98a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.658723 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-mountpoint-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.659278 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/96f8638c-32af-4ed6-87de-790aa042e15c-images\") pod \"machine-config-operator-74547568cd-dqlvl\" (UID: \"96f8638c-32af-4ed6-87de-790aa042e15c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.659702 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96f8638c-32af-4ed6-87de-790aa042e15c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dqlvl\" (UID: \"96f8638c-32af-4ed6-87de-790aa042e15c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.659722 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/249177f9-7b1e-4d39-a400-e625862f53c3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j8jdx\" (UID: \"249177f9-7b1e-4d39-a400-e625862f53c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.660038 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c00dfac3-8de9-4673-a6b4-2965a204accb-plugins-dir\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.661630 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b221011-ac95-4cb7-8374-a7b61f91ce72-config\") pod \"service-ca-operator-777779d784-n5vw7\" (UID: \"2b221011-ac95-4cb7-8374-a7b61f91ce72\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.663770 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da56ae41-00cb-4345-a6be-1ceb542b8afe-config-volume\") pod \"collect-profiles-29495040-q8q6g\" (UID: \"da56ae41-00cb-4345-a6be-1ceb542b8afe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.668341 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6sc6\" (UniqueName: \"kubernetes.io/projected/3d3c04e5-aec8-4352-9eae-ab6e388931eb-kube-api-access-q6sc6\") pod \"machine-config-controller-84d6567774-7sbh6\" (UID: \"3d3c04e5-aec8-4352-9eae-ab6e388931eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.668599 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6fe9b00e-af32-4089-8a31-d0c1735d001f-node-bootstrap-token\") pod \"machine-config-server-22gbx\" (UID: \"6fe9b00e-af32-4089-8a31-d0c1735d001f\") " pod="openshift-machine-config-operator/machine-config-server-22gbx" Jan 29 16:14:17 crc kubenswrapper[4895]: E0129 16:14:17.669146 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:18.169110575 +0000 UTC m=+141.972087839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.669602 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da56ae41-00cb-4345-a6be-1ceb542b8afe-secret-volume\") pod \"collect-profiles-29495040-q8q6g\" (UID: \"da56ae41-00cb-4345-a6be-1ceb542b8afe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.681692 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-w4jl6\" (UID: \"46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.686463 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9c39ec96-c5ce-40f8-80b5-68baacd59516-stats-auth\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.689203 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06e1dd20-1df7-45ad-805b-64235d3521d7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-f945r\" (UID: \"06e1dd20-1df7-45ad-805b-64235d3521d7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.698974 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6fe9b00e-af32-4089-8a31-d0c1735d001f-certs\") pod \"machine-config-server-22gbx\" (UID: \"6fe9b00e-af32-4089-8a31-d0c1735d001f\") " pod="openshift-machine-config-operator/machine-config-server-22gbx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.699169 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w6wp\" (UniqueName: \"kubernetes.io/projected/ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98-kube-api-access-6w6wp\") pod \"package-server-manager-789f6589d5-c7csh\" (UID: \"ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.700931 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bq54w"] Jan 29 16:14:17 crc 
kubenswrapper[4895]: I0129 16:14:17.702246 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js4b9\" (UniqueName: \"kubernetes.io/projected/bccbee59-7684-41ba-900e-a9e5bf164b80-kube-api-access-js4b9\") pod \"catalog-operator-68c6474976-8xfpj\" (UID: \"bccbee59-7684-41ba-900e-a9e5bf164b80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.705976 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96f8638c-32af-4ed6-87de-790aa042e15c-proxy-tls\") pod \"machine-config-operator-74547568cd-dqlvl\" (UID: \"96f8638c-32af-4ed6-87de-790aa042e15c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.708517 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qt6sq"] Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.713303 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t675c"] Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.719804 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:17 crc kubenswrapper[4895]: E0129 16:14:17.720355 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:18.22033542 +0000 UTC m=+142.023312684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.721772 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a22956a0-592e-4592-9237-6c42acfa8b56-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r7srz\" (UID: \"a22956a0-592e-4592-9237-6c42acfa8b56\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.748182 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-p6ck2"] Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.754160 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4761e6d8-f523-4c75-bfdb-76a523dfbede-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqxcc\" (UID: \"4761e6d8-f523-4c75-bfdb-76a523dfbede\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.755774 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2pkk\" (UniqueName: \"kubernetes.io/projected/85eba540-f075-4ac1-9667-f462b08ebefa-kube-api-access-n2pkk\") pod \"migrator-59844c95c7-g9q58\" (UID: \"85eba540-f075-4ac1-9667-f462b08ebefa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g9q58" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.756120 4895 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6bcff"] Jan 29 16:14:17 crc kubenswrapper[4895]: W0129 16:14:17.756376 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dda0f03_a7e0_442d_b684_9b6b5a1885ab.slice/crio-6c19650e86e1bf6b2a8e2251d2e01e79a520b20b1d2edbb1913d174aca114380 WatchSource:0}: Error finding container 6c19650e86e1bf6b2a8e2251d2e01e79a520b20b1d2edbb1913d174aca114380: Status 404 returned error can't find the container with id 6c19650e86e1bf6b2a8e2251d2e01e79a520b20b1d2edbb1913d174aca114380 Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.766579 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk"] Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.773533 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4tp\" (UniqueName: \"kubernetes.io/projected/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-kube-api-access-ct4tp\") pod \"oauth-openshift-558db77b4-pmkl9\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.775318 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.787829 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndq8w\" (UniqueName: \"kubernetes.io/projected/7d2b65e1-1f81-46fd-8cbf-bf1797d8852d-kube-api-access-ndq8w\") pod \"ingress-operator-5b745b69d9-bmmgq\" (UID: \"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.791839 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.807767 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd"] Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.809249 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.811262 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvx9j\" (UniqueName: \"kubernetes.io/projected/9414a16a-b347-4f0e-af41-9ff94ea7cc05-kube-api-access-zvx9j\") pod \"console-operator-58897d9998-snck7\" (UID: \"9414a16a-b347-4f0e-af41-9ff94ea7cc05\") " pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.822140 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: E0129 16:14:17.822823 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:18.322702069 +0000 UTC m=+142.125679333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.830715 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-bound-sa-token\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: W0129 16:14:17.833112 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4c99b9a_0b66_49b1_8d55_72a6bea92059.slice/crio-3abb179492963ce96b8f9a1db6d88a8f9fb2e44b55246f9663b367fcd06a9f02 WatchSource:0}: Error finding container 3abb179492963ce96b8f9a1db6d88a8f9fb2e44b55246f9663b367fcd06a9f02: Status 404 returned error can't find the container with id 3abb179492963ce96b8f9a1db6d88a8f9fb2e44b55246f9663b367fcd06a9f02 Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.841883 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.851108 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.859077 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.865020 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qt6sq" event={"ID":"01e50c09-6efb-44d3-8919-dcea4b9f6a72","Type":"ContainerStarted","Data":"f782c6ba156161f4b17f0f951119697c69640b53529c5cc807835a8b0613ccfd"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.865236 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76pr9\" (UniqueName: \"kubernetes.io/projected/c00dfac3-8de9-4673-a6b4-2965a204accb-kube-api-access-76pr9\") pod \"csi-hostpathplugin-xc67m\" (UID: \"c00dfac3-8de9-4673-a6b4-2965a204accb\") " pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.867217 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.872855 4895 generic.go:334] "Generic (PLEG): container finished" podID="3acaf5b1-0f37-4157-85e0-926718993903" containerID="5269c2f9cb8c1b4ed0e6b9fe1873911e0662fc891b177841e83813c508074596" exitCode=0 Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.872948 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" event={"ID":"3acaf5b1-0f37-4157-85e0-926718993903","Type":"ContainerDied","Data":"5269c2f9cb8c1b4ed0e6b9fe1873911e0662fc891b177841e83813c508074596"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.872986 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" event={"ID":"3acaf5b1-0f37-4157-85e0-926718993903","Type":"ContainerStarted","Data":"fba49367d8ad8fc91e16e6040d102b377dd773a8b758b8c395979c778b7b3014"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 
16:14:17.877468 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" event={"ID":"53da942c-31af-4b5b-9e63-4e53147ad257","Type":"ContainerStarted","Data":"d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.877509 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" event={"ID":"53da942c-31af-4b5b-9e63-4e53147ad257","Type":"ContainerStarted","Data":"2f8412e439ff1e93a65449219a0d4b00122bb2cd151d75b763d7bccda725ab56"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.878087 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.882451 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.884210 4895 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-r8fhh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.884342 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" podUID="53da942c-31af-4b5b-9e63-4e53147ad257" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.884942 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" 
event={"ID":"2c9ef793-c9ca-4c0a-9ab0-09115c564646","Type":"ContainerStarted","Data":"4ec3b1b6c57e412498f0265a0474eb3753c63853b88ef47518f4813f50961bd6"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.885042 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" event={"ID":"2c9ef793-c9ca-4c0a-9ab0-09115c564646","Type":"ContainerStarted","Data":"def5c4b9ec456770f282d449e5b4ee17dc3136abf3feac5ca1b2620e17ace7e5"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.885111 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" event={"ID":"2c9ef793-c9ca-4c0a-9ab0-09115c564646","Type":"ContainerStarted","Data":"ff7bad875f09a9519d257077ac09d2adf59fa39b9f559b122612f3b5c376f982"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.887337 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl2gb\" (UniqueName: \"kubernetes.io/projected/8d5c56cc-9bff-4531-b57f-6a14c1d0802a-kube-api-access-rl2gb\") pod \"multus-admission-controller-857f4d67dd-jjgzj\" (UID: \"8d5c56cc-9bff-4531-b57f-6a14c1d0802a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jjgzj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.890969 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.894667 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" event={"ID":"84c1ec53-3121-4cd0-9665-4c8a43864032","Type":"ContainerStarted","Data":"80b59581ea5ab4048c9d01142208cf7b2d56cd56ab31cf95dd3e026c864da880"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.897536 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g9q58" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.902873 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t675c" event={"ID":"9dda0f03-a7e0-442d-b684-9b6b5a1885ab","Type":"ContainerStarted","Data":"6c19650e86e1bf6b2a8e2251d2e01e79a520b20b1d2edbb1913d174aca114380"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.903810 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jjgzj" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.909008 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cldws\" (UniqueName: \"kubernetes.io/projected/46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a-kube-api-access-cldws\") pod \"kube-storage-version-migrator-operator-b67b599dd-w4jl6\" (UID: \"46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.910516 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" event={"ID":"9d07af33-8811-4683-883c-6e20ef713ea4","Type":"ContainerStarted","Data":"9e1a2d5a6618d93cd8ece303c70df062fc7f9bbcce102eeab14a74c36028a0b5"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.910565 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" event={"ID":"9d07af33-8811-4683-883c-6e20ef713ea4","Type":"ContainerStarted","Data":"0a952fdb3db6b6343a3dbf2434c99ad7df46437af88ee93498844dda33b7aaeb"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.911240 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.923708 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:17 crc kubenswrapper[4895]: E0129 16:14:17.924144 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:18.424109305 +0000 UTC m=+142.227086569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.924658 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.925683 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" event={"ID":"b8098be9-256f-478d-a218-3e74a5ef8ca9","Type":"ContainerStarted","Data":"817dc4b2d8429e7be07c071e39f8ade869fd6ec235420022d001c30fe84b42d2"} Jan 29 16:14:17 crc kubenswrapper[4895]: E0129 16:14:17.926344 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:18.426333708 +0000 UTC m=+142.229310972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.927297 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" event={"ID":"a4c99b9a-0b66-49b1-8d55-72a6bea92059","Type":"ContainerStarted","Data":"3abb179492963ce96b8f9a1db6d88a8f9fb2e44b55246f9663b367fcd06a9f02"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.934860 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhp2m\" (UniqueName: \"kubernetes.io/projected/d705d04a-92ce-41c8-91f8-c035a8dfa98a-kube-api-access-vhp2m\") pod \"packageserver-d55dfcdfc-nsjmx\" (UID: \"d705d04a-92ce-41c8-91f8-c035a8dfa98a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.952210 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" event={"ID":"5e29d559-3a15-4a8f-9494-6c5d4cf4c642","Type":"ContainerStarted","Data":"1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.952296 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" event={"ID":"5e29d559-3a15-4a8f-9494-6c5d4cf4c642","Type":"ContainerStarted","Data":"e6385a9f789d4f55010636af2ae666e0e00bb1234ae3fd362a059bcfd4984cc6"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.952723 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.955656 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjqwr\" (UniqueName: \"kubernetes.io/projected/72cc9605-664e-4897-93c5-b386c20517d1-kube-api-access-fjqwr\") pod \"dns-default-wb4kv\" (UID: \"72cc9605-664e-4897-93c5-b386c20517d1\") " pod="openshift-dns/dns-default-wb4kv" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.962623 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p6ck2" event={"ID":"6dd34441-4294-4e90-9f2d-909c5aecdff7","Type":"ContainerStarted","Data":"3b8ca7f5945c3d5a02566f18377b06f53416b7a8d3ca876bdae842e5f8caf23a"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.967381 4895 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-z4xsp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.967444 4895 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" podUID="5e29d559-3a15-4a8f-9494-6c5d4cf4c642" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.967842 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xc67m" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.969943 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" event={"ID":"324d5ea2-a2d9-4001-8892-3aa92d6e323e","Type":"ContainerStarted","Data":"7150ebf4152b3ab27c0eb89cd887d134e48be110164c0825ceb318dca94dc44a"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.970000 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" event={"ID":"324d5ea2-a2d9-4001-8892-3aa92d6e323e","Type":"ContainerStarted","Data":"2101566b5fa68ae78d89cc0e58d9034ad6b9328bf9c97a9cc2ec4377d7b02743"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.974107 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6bcff" event={"ID":"01f63105-32cc-4bc3-a677-8d0a7d967af2","Type":"ContainerStarted","Data":"91b7cc209b1a6a2caf6094a32f258f310006610ec00615c3ce6b7db464cda8d9"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.981294 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp7dn\" (UniqueName: \"kubernetes.io/projected/9c39ec96-c5ce-40f8-80b5-68baacd59516-kube-api-access-sp7dn\") pod \"router-default-5444994796-s5x8l\" (UID: \"9c39ec96-c5ce-40f8-80b5-68baacd59516\") " pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.981716 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" event={"ID":"a74a90ae-daab-4183-88d5-4cb49b9ed96e","Type":"ContainerStarted","Data":"3faa5a8ef3871d7c69dd07ee63b643e93ba5bf4e517a19cad4064d7b2d59ba3f"} Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.989887 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48zmf\" (UniqueName: \"kubernetes.io/projected/6fe9b00e-af32-4089-8a31-d0c1735d001f-kube-api-access-48zmf\") pod \"machine-config-server-22gbx\" (UID: \"6fe9b00e-af32-4089-8a31-d0c1735d001f\") " pod="openshift-machine-config-operator/machine-config-server-22gbx" Jan 29 16:14:17 crc kubenswrapper[4895]: I0129 16:14:17.992930 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wb4kv" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.008930 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-22gbx" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.010256 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czhkh\" (UniqueName: \"kubernetes.io/projected/96f8638c-32af-4ed6-87de-790aa042e15c-kube-api-access-czhkh\") pod \"machine-config-operator-74547568cd-dqlvl\" (UID: \"96f8638c-32af-4ed6-87de-790aa042e15c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.026730 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:18 crc kubenswrapper[4895]: E0129 16:14:18.028924 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:18.528855789 +0000 UTC m=+142.331833053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.029080 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzm4n\" (UniqueName: \"kubernetes.io/projected/c2b62e00-db4e-4ce9-8dd1-717159043f83-kube-api-access-pzm4n\") pod \"control-plane-machine-set-operator-78cbb6b69f-dp6dw\" (UID: \"c2b62e00-db4e-4ce9-8dd1-717159043f83\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.035642 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn"] Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.047081 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sgfq\" (UniqueName: \"kubernetes.io/projected/2b221011-ac95-4cb7-8374-a7b61f91ce72-kube-api-access-7sgfq\") pod \"service-ca-operator-777779d784-n5vw7\" (UID: \"2b221011-ac95-4cb7-8374-a7b61f91ce72\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.085191 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z7kz\" 
(UniqueName: \"kubernetes.io/projected/d08826f7-331b-4ed8-98c2-2faaab99d384-kube-api-access-6z7kz\") pod \"olm-operator-6b444d44fb-sx29z\" (UID: \"d08826f7-331b-4ed8-98c2-2faaab99d384\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.091204 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrdkd\" (UniqueName: \"kubernetes.io/projected/da56ae41-00cb-4345-a6be-1ceb542b8afe-kube-api-access-nrdkd\") pod \"collect-profiles-29495040-q8q6g\" (UID: \"da56ae41-00cb-4345-a6be-1ceb542b8afe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.102613 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.111636 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgxw5\" (UniqueName: \"kubernetes.io/projected/249177f9-7b1e-4d39-a400-e625862f53c3-kube-api-access-kgxw5\") pod \"marketplace-operator-79b997595-j8jdx\" (UID: \"249177f9-7b1e-4d39-a400-e625862f53c3\") " pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.129285 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:18 crc kubenswrapper[4895]: E0129 16:14:18.130198 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:18.630144543 +0000 UTC m=+142.433121807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.133499 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.136184 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grgq8\" (UniqueName: \"kubernetes.io/projected/253b52d8-4b46-4b35-a97d-8dd6f1a00cb0-kube-api-access-grgq8\") pod \"ingress-canary-5tcbt\" (UID: \"253b52d8-4b46-4b35-a97d-8dd6f1a00cb0\") " pod="openshift-ingress-canary/ingress-canary-5tcbt" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.176095 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.190970 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjlmr\" (UniqueName: \"kubernetes.io/projected/f2305b0f-2d33-4c71-a687-0533fe97d02d-kube-api-access-hjlmr\") pod \"service-ca-9c57cc56f-pcdgm\" (UID: \"f2305b0f-2d33-4c71-a687-0533fe97d02d\") " pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.224572 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.224948 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.234612 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.238275 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:18 crc kubenswrapper[4895]: E0129 16:14:18.239208 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:18.739173649 +0000 UTC m=+142.542150913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.241965 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.248888 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.254705 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pmkl9"] Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.299905 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.300485 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.301560 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5tcbt" Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.315939 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc"] Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.319968 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jjgzj"] Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.340033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:18 crc kubenswrapper[4895]: E0129 16:14:18.340954 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:18.840938783 +0000 UTC m=+142.643916047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.446761 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:18 crc kubenswrapper[4895]: E0129 16:14:18.447349 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:18.947325506 +0000 UTC m=+142.750302770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.551822 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:18 crc kubenswrapper[4895]: E0129 16:14:18.552183 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:19.052168783 +0000 UTC m=+142.855146047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.656537 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:18 crc kubenswrapper[4895]: E0129 16:14:18.657281 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:19.157261886 +0000 UTC m=+142.960239150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.761310 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:18 crc kubenswrapper[4895]: E0129 16:14:18.761695 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:19.261681333 +0000 UTC m=+143.064658597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.868573 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:18 crc kubenswrapper[4895]: E0129 16:14:18.869983 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:19.369962111 +0000 UTC m=+143.172939375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:18 crc kubenswrapper[4895]: I0129 16:14:18.971187 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:18 crc kubenswrapper[4895]: E0129 16:14:18.971647 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:19.471630043 +0000 UTC m=+143.274607307 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.074403 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:19 crc kubenswrapper[4895]: E0129 16:14:19.075755 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:19.575728722 +0000 UTC m=+143.378705986 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.178315 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:19 crc kubenswrapper[4895]: E0129 16:14:19.181592 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:19.681572623 +0000 UTC m=+143.484549887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.270845 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jjgzj" event={"ID":"8d5c56cc-9bff-4531-b57f-6a14c1d0802a","Type":"ContainerStarted","Data":"575600387f6514a387e946ad04d664542336a8ed2886d336f7620f4a94bfdff1"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.281019 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:19 crc kubenswrapper[4895]: E0129 16:14:19.281468 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:19.781449864 +0000 UTC m=+143.584427128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.290003 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" event={"ID":"4761e6d8-f523-4c75-bfdb-76a523dfbede","Type":"ContainerStarted","Data":"82e4c3b40608bfd842af5b2e16d6b18dfcc9bb6fe4eff2f6dc16b90735ac304e"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.313729 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t675c" event={"ID":"9dda0f03-a7e0-442d-b684-9b6b5a1885ab","Type":"ContainerStarted","Data":"488c571e2b7fb5311a39b80227aee282e675ca503fce7eacdf94255644894c31"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.314389 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t675c" Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.320630 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" event={"ID":"a4c99b9a-0b66-49b1-8d55-72a6bea92059","Type":"ContainerStarted","Data":"af9fe3d74adece32ceb05f0df43fd58967a7a2779b659967da167aaabf69afc5"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.322523 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qt6sq" event={"ID":"01e50c09-6efb-44d3-8919-dcea4b9f6a72","Type":"ContainerStarted","Data":"87f6c925fdf6c45f3e240f35d6476745563ca8d39b8ac76ccfde0a612f968155"} Jan 
29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.326822 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p6ck2" event={"ID":"6dd34441-4294-4e90-9f2d-909c5aecdff7","Type":"ContainerStarted","Data":"13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.355906 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-22gbx" event={"ID":"6fe9b00e-af32-4089-8a31-d0c1735d001f","Type":"ContainerStarted","Data":"ab61f9c6e3427f5193d36fc87ff5f41b9448bbaa08c7e7df1087fda77b4224c3"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.358929 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" event={"ID":"ae39172d-1e70-4e7c-8222-0cdaf8e645d6","Type":"ContainerStarted","Data":"1625011d40cbcb846aba1086006d4ff223f781d916e4a16becb24b8d3e037b24"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.364893 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" event={"ID":"84c1ec53-3121-4cd0-9665-4c8a43864032","Type":"ContainerStarted","Data":"2a29ac5847496377388b22d5b644be0eec90f2bcbf92add642c7400a0ac9b073"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.385921 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" event={"ID":"dced6426-2e21-4397-9074-92e06a03409a","Type":"ContainerStarted","Data":"76bb0fb53d1e43c821443cc7cc8f46467e10f66fed9ac50b8bf1f82f7780c01c"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.388389 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:19 crc kubenswrapper[4895]: E0129 16:14:19.391116 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:19.891094984 +0000 UTC m=+143.694072328 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.405526 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk" event={"ID":"164e7cd8-e106-4eab-81ea-bc00c73daf2a","Type":"ContainerStarted","Data":"b9fc8ebd70ba72a0fea79f6340b72ca0ce0cee3960cb985765bc11227fb383ee"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.407894 4895 generic.go:334] "Generic (PLEG): container finished" podID="01f63105-32cc-4bc3-a677-8d0a7d967af2" containerID="4d59bd0ee529d2774cfe25238b6278b981f43a5193cdd96fc5f94066c78ecf17" exitCode=0 Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.407969 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6bcff" event={"ID":"01f63105-32cc-4bc3-a677-8d0a7d967af2","Type":"ContainerDied","Data":"4d59bd0ee529d2774cfe25238b6278b981f43a5193cdd96fc5f94066c78ecf17"} Jan 
29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.424531 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" event={"ID":"b8098be9-256f-478d-a218-3e74a5ef8ca9","Type":"ContainerStarted","Data":"6833556e80762e006a1c5d749ffebece7d30f23fe1c19dd9d52882e394d07778"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.430801 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-s5x8l" event={"ID":"9c39ec96-c5ce-40f8-80b5-68baacd59516","Type":"ContainerStarted","Data":"2333c8d2f919bb93e4ae80ecaafa1d9538b7771d69b3a30a7ebbb2cc79bb9b8d"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.438774 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-snck7"] Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.443308 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" event={"ID":"a74a90ae-daab-4183-88d5-4cb49b9ed96e","Type":"ContainerStarted","Data":"75d6a999f5d0e2dcfa9e02ae22c4a6e3f8286e0388f443e474c0848310b4af01"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.448286 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" podStartSLOduration=122.448264178 podStartE2EDuration="2m2.448264178s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:19.446419855 +0000 UTC m=+143.249397129" watchObservedRunningTime="2026-01-29 16:14:19.448264178 +0000 UTC m=+143.251241442" Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.450237 4895 generic.go:334] "Generic (PLEG): container finished" podID="9d07af33-8811-4683-883c-6e20ef713ea4" 
containerID="9e1a2d5a6618d93cd8ece303c70df062fc7f9bbcce102eeab14a74c36028a0b5" exitCode=0 Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.450441 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" event={"ID":"9d07af33-8811-4683-883c-6e20ef713ea4","Type":"ContainerDied","Data":"9e1a2d5a6618d93cd8ece303c70df062fc7f9bbcce102eeab14a74c36028a0b5"} Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.460988 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj"] Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.466511 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r"] Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.489466 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:19 crc kubenswrapper[4895]: E0129 16:14:19.490413 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:19.990367899 +0000 UTC m=+143.793345173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.504622 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz"] Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.527391 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4s5g2" podStartSLOduration=122.524988864 podStartE2EDuration="2m2.524988864s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:19.521099362 +0000 UTC m=+143.324076636" watchObservedRunningTime="2026-01-29 16:14:19.524988864 +0000 UTC m=+143.327966128" Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.587603 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.587691 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:14:19 crc kubenswrapper[4895]: 
I0129 16:14:19.589411 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.591175 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.591625 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:14:19 crc kubenswrapper[4895]: E0129 16:14:19.596642 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:20.096621669 +0000 UTC m=+143.899598933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.642058 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-4n52t" podStartSLOduration=123.642025357 podStartE2EDuration="2m3.642025357s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:19.6153297 +0000 UTC m=+143.418306974" watchObservedRunningTime="2026-01-29 16:14:19.642025357 +0000 UTC m=+143.445002641" Jan 29 16:14:19 crc kubenswrapper[4895]: W0129 16:14:19.656175 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06e1dd20_1df7_45ad_805b_64235d3521d7.slice/crio-28f9023d58fbf5309139e40cd123f8cc0bb1897fbdc133327890766f6d77230c WatchSource:0}: Error finding container 28f9023d58fbf5309139e40cd123f8cc0bb1897fbdc133327890766f6d77230c: Status 404 returned error can't find the container with id 28f9023d58fbf5309139e40cd123f8cc0bb1897fbdc133327890766f6d77230c Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.686309 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" podStartSLOduration=123.686278689 podStartE2EDuration="2m3.686278689s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:19.68335048 +0000 UTC m=+143.486327744" watchObservedRunningTime="2026-01-29 16:14:19.686278689 +0000 UTC m=+143.489255953" Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.695324 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:19 crc kubenswrapper[4895]: E0129 16:14:19.712544 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:20.212489295 +0000 UTC m=+144.015466559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.713257 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:19 crc kubenswrapper[4895]: E0129 16:14:19.713903 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:20.213887559 +0000 UTC m=+144.016864823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:19 crc kubenswrapper[4895]: W0129 16:14:19.715240 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbccbee59_7684_41ba_900e_a9e5bf164b80.slice/crio-bc193225b80f04ce18a6275b9158940ad527c79ae3d19d3464c1cbd61428585d WatchSource:0}: Error finding container bc193225b80f04ce18a6275b9158940ad527c79ae3d19d3464c1cbd61428585d: Status 404 returned error can't find the container with id bc193225b80f04ce18a6275b9158940ad527c79ae3d19d3464c1cbd61428585d Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.835852 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:19 crc kubenswrapper[4895]: E0129 16:14:19.846301 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:20.346256283 +0000 UTC m=+144.149233547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.856726 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq9bd" podStartSLOduration=123.856700639 podStartE2EDuration="2m3.856700639s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:19.833007281 +0000 UTC m=+143.635984555" watchObservedRunningTime="2026-01-29 16:14:19.856700639 +0000 UTC m=+143.659677913" Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.913843 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pcdgm"] Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.917269 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g9q58"] Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.920237 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-t675c" podStartSLOduration=123.920212074 podStartE2EDuration="2m3.920212074s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:19.895625175 +0000 UTC m=+143.698602439" watchObservedRunningTime="2026-01-29 16:14:19.920212074 +0000 UTC m=+143.723189338" 
Jan 29 16:14:19 crc kubenswrapper[4895]: I0129 16:14:19.944277 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:19 crc kubenswrapper[4895]: E0129 16:14:19.944830 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:20.444809802 +0000 UTC m=+144.247787066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.020900 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-cv44f" podStartSLOduration=123.020854502 podStartE2EDuration="2m3.020854502s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.020790689 +0000 UTC m=+143.823767973" watchObservedRunningTime="2026-01-29 16:14:20.020854502 +0000 UTC m=+143.823831766" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.041754 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6"] Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.052815 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:20 crc kubenswrapper[4895]: E0129 16:14:20.053333 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:20.553314135 +0000 UTC m=+144.356291389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.058339 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" podStartSLOduration=124.058312363 podStartE2EDuration="2m4.058312363s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.056581642 +0000 UTC m=+143.859558926" watchObservedRunningTime="2026-01-29 16:14:20.058312363 +0000 UTC m=+143.861289637" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 
16:14:20.119017 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-p6ck2" podStartSLOduration=124.118993011 podStartE2EDuration="2m4.118993011s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.112470217 +0000 UTC m=+143.915447481" watchObservedRunningTime="2026-01-29 16:14:20.118993011 +0000 UTC m=+143.921970265" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.156296 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:20 crc kubenswrapper[4895]: E0129 16:14:20.156739 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:20.656721108 +0000 UTC m=+144.459698372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.208242 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j8jdx"] Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.218370 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh"] Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.218434 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xc67m"] Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.257249 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:20 crc kubenswrapper[4895]: E0129 16:14:20.257660 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:20.757644023 +0000 UTC m=+144.560621287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.323023 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wb4kv"] Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.325342 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq"] Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.340751 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6"] Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.359285 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:20 crc kubenswrapper[4895]: E0129 16:14:20.359673 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:20.859659553 +0000 UTC m=+144.662636817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.375843 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx"] Jan 29 16:14:20 crc kubenswrapper[4895]: W0129 16:14:20.386198 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d2b65e1_1f81_46fd_8cbf_bf1797d8852d.slice/crio-280adb4a7d38727a3c1eaa69ab34733d1e5d4deb8f796b49416d0a029f325619 WatchSource:0}: Error finding container 280adb4a7d38727a3c1eaa69ab34733d1e5d4deb8f796b49416d0a029f325619: Status 404 returned error can't find the container with id 280adb4a7d38727a3c1eaa69ab34733d1e5d4deb8f796b49416d0a029f325619 Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.415365 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl"] Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.440099 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bq54w" podStartSLOduration=124.440074156 podStartE2EDuration="2m4.440074156s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.408118244 +0000 UTC m=+144.211095508" watchObservedRunningTime="2026-01-29 16:14:20.440074156 +0000 UTC m=+144.243051420" Jan 
29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.470751 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:20 crc kubenswrapper[4895]: E0129 16:14:20.472041 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:20.972006837 +0000 UTC m=+144.774984101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.495096 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw"] Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.543175 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" event={"ID":"a74a90ae-daab-4183-88d5-4cb49b9ed96e","Type":"ContainerStarted","Data":"d5018535a788604b825abeb196f1869fa33dd199f1276e7a3ba6ddbde20872c3"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.569358 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wb4kv" 
event={"ID":"72cc9605-664e-4897-93c5-b386c20517d1","Type":"ContainerStarted","Data":"a76088ce15a40c514ff62741987655902bd7134b8043b4c401b49a02bd4d1187"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.615790 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:20 crc kubenswrapper[4895]: E0129 16:14:20.617729 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:21.117712736 +0000 UTC m=+144.920690000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.622049 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5tcbt"] Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.631117 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7"] Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.659153 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g"] Jan 29 16:14:20 crc 
kubenswrapper[4895]: I0129 16:14:20.660146 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z"] Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.693672 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jjgzj" event={"ID":"8d5c56cc-9bff-4531-b57f-6a14c1d0802a","Type":"ContainerStarted","Data":"354ea98a7b8944495febb5cb5cdca214197d1c8e4f47189926bdcdaa42abfa2b"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.723574 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:20 crc kubenswrapper[4895]: E0129 16:14:20.724664 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:21.224623411 +0000 UTC m=+145.027600675 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.737025 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" event={"ID":"46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a","Type":"ContainerStarted","Data":"4b6e4973f24b745e6b6c231002d02e6fca9f923fa7773d1670f52f9d8cb43649"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.759108 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-snck7" event={"ID":"9414a16a-b347-4f0e-af41-9ff94ea7cc05","Type":"ContainerStarted","Data":"e4f97ec6d3703a3dd69d34019e6d7c480de2cb8be275f9945018ac7f0fc354c2"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.760347 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.774446 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" event={"ID":"3acaf5b1-0f37-4157-85e0-926718993903","Type":"ContainerStarted","Data":"1297b61cd349325832bc825fcc9b66293a20e87af35056683e095f444d77baae"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.778691 4895 patch_prober.go:28] interesting pod/console-operator-58897d9998-snck7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 
10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.778741 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-snck7" podUID="9414a16a-b347-4f0e-af41-9ff94ea7cc05" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.784092 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-snck7" podStartSLOduration=124.78406816 podStartE2EDuration="2m4.78406816s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.782440561 +0000 UTC m=+144.585417835" watchObservedRunningTime="2026-01-29 16:14:20.78406816 +0000 UTC m=+144.587045424" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.785034 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vxd7m" podStartSLOduration=124.785030572 podStartE2EDuration="2m4.785030572s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.727287574 +0000 UTC m=+144.530264858" watchObservedRunningTime="2026-01-29 16:14:20.785030572 +0000 UTC m=+144.588007836" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.795548 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" event={"ID":"ae39172d-1e70-4e7c-8222-0cdaf8e645d6","Type":"ContainerStarted","Data":"1552840d0d3bfc3dd9662620943b4dcfb084eb0fdcd40e97135d818a362a0278"} Jan 29 16:14:20 
crc kubenswrapper[4895]: I0129 16:14:20.796225 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.799607 4895 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-pmkl9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" start-of-body= Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.799650 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" podUID="ae39172d-1e70-4e7c-8222-0cdaf8e645d6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.801755 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" event={"ID":"3d3c04e5-aec8-4352-9eae-ab6e388931eb","Type":"ContainerStarted","Data":"3986e6365015660c0be8243481f6033e62f40efb1081ea11682549f72fa8935c"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.802931 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc67m" event={"ID":"c00dfac3-8de9-4673-a6b4-2965a204accb","Type":"ContainerStarted","Data":"8a23819e76f87eb753df2a828b2326422dd5a175acd7f0f7f5ee65495518e782"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.805053 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" event={"ID":"d705d04a-92ce-41c8-91f8-c035a8dfa98a","Type":"ContainerStarted","Data":"dfea6157f06486692f4c3604e859fbbccffb073bec070615785009b0060f4554"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.809733 
4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" event={"ID":"a22956a0-592e-4592-9237-6c42acfa8b56","Type":"ContainerStarted","Data":"cb7cb84acc185afa3148131d1be60d59141ae03687edba79d689b1eddb80c52b"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.822975 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" event={"ID":"4761e6d8-f523-4c75-bfdb-76a523dfbede","Type":"ContainerStarted","Data":"03b6f7a24e9328a8324cc488a71003b6d16afb365aeb57785b7360656dcd698d"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.824131 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" podStartSLOduration=123.824110062 podStartE2EDuration="2m3.824110062s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.822488144 +0000 UTC m=+144.625465408" watchObservedRunningTime="2026-01-29 16:14:20.824110062 +0000 UTC m=+144.627087326" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.825002 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:20 crc kubenswrapper[4895]: E0129 16:14:20.825374 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 16:14:21.325360772 +0000 UTC m=+145.128338026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.838474 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" event={"ID":"f2305b0f-2d33-4c71-a687-0533fe97d02d","Type":"ContainerStarted","Data":"1d20299e45aea10743f5688fd68745a7b90d082582ad1bf4a9ba72c59699d870"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.847638 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" podStartSLOduration=123.847612245 podStartE2EDuration="2m3.847612245s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.846544369 +0000 UTC m=+144.649521633" watchObservedRunningTime="2026-01-29 16:14:20.847612245 +0000 UTC m=+144.650589509" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.855857 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" event={"ID":"9d07af33-8811-4683-883c-6e20ef713ea4","Type":"ContainerStarted","Data":"adc437d5bbbaa91c2ab2094ae23ef736404861d06c3ed767852173c308e2f1b9"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.855932 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.860681 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-s5x8l" event={"ID":"9c39ec96-c5ce-40f8-80b5-68baacd59516","Type":"ContainerStarted","Data":"61d0315b79f07c76e4fd72c0ce99c8f70ffd1fd458e964fa50c6efd0ebfa3e96"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.862361 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" podStartSLOduration=123.862339772 podStartE2EDuration="2m3.862339772s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.86139899 +0000 UTC m=+144.664376254" watchObservedRunningTime="2026-01-29 16:14:20.862339772 +0000 UTC m=+144.665317036" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.874694 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" event={"ID":"249177f9-7b1e-4d39-a400-e625862f53c3","Type":"ContainerStarted","Data":"f7ece3abb26d25a5f2ce425e121af6f6042a6869c4e32dfd071ee7bda9b270ba"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.877938 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" event={"ID":"bccbee59-7684-41ba-900e-a9e5bf164b80","Type":"ContainerStarted","Data":"bc193225b80f04ce18a6275b9158940ad527c79ae3d19d3464c1cbd61428585d"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.879093 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.879905 4895 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqxcc" podStartSLOduration=123.879886524 podStartE2EDuration="2m3.879886524s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.879432294 +0000 UTC m=+144.682409568" watchObservedRunningTime="2026-01-29 16:14:20.879886524 +0000 UTC m=+144.682863788" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.897147 4895 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8xfpj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.897236 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" podUID="bccbee59-7684-41ba-900e-a9e5bf164b80" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.908103 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fcftn" event={"ID":"dced6426-2e21-4397-9074-92e06a03409a","Type":"ContainerStarted","Data":"851ed0b05fe05ef88b89fb0f2411234c858e506d381083998b7a98c6d447f149"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.921329 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g9q58" event={"ID":"85eba540-f075-4ac1-9667-f462b08ebefa","Type":"ContainerStarted","Data":"0770d26be0d421f3762b29f5e58f3c3fc04459a5ad6a55e3e2db4312fe30b5b0"} Jan 29 16:14:20 crc 
kubenswrapper[4895]: I0129 16:14:20.921940 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-s5x8l" podStartSLOduration=123.921918663 podStartE2EDuration="2m3.921918663s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.914992731 +0000 UTC m=+144.717969995" watchObservedRunningTime="2026-01-29 16:14:20.921918663 +0000 UTC m=+144.724895927" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.924234 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" event={"ID":"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d","Type":"ContainerStarted","Data":"280adb4a7d38727a3c1eaa69ab34733d1e5d4deb8f796b49416d0a029f325619"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.927379 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:20 crc kubenswrapper[4895]: E0129 16:14:20.931469 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:21.431444708 +0000 UTC m=+145.234421972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.938054 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" podStartSLOduration=124.938036433 podStartE2EDuration="2m4.938036433s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.935910012 +0000 UTC m=+144.738887276" watchObservedRunningTime="2026-01-29 16:14:20.938036433 +0000 UTC m=+144.741013697" Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.950783 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qt6sq" event={"ID":"01e50c09-6efb-44d3-8919-dcea4b9f6a72","Type":"ContainerStarted","Data":"a8b52098e40b4984b38de4da842798a9090cbcb0a8120e243a2eada9e880ece5"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.973907 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" podStartSLOduration=123.973880596 podStartE2EDuration="2m3.973880596s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:20.971317466 +0000 UTC m=+144.774294730" watchObservedRunningTime="2026-01-29 16:14:20.973880596 +0000 UTC m=+144.776857870" Jan 29 16:14:20 
crc kubenswrapper[4895]: I0129 16:14:20.982712 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" event={"ID":"ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98","Type":"ContainerStarted","Data":"867ce82e0f8188e1f3b2f8a23fdfde2346eea5ca83582f26f3542b2b57c51ef4"} Jan 29 16:14:20 crc kubenswrapper[4895]: I0129 16:14:20.996743 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" event={"ID":"06e1dd20-1df7-45ad-805b-64235d3521d7","Type":"ContainerStarted","Data":"28f9023d58fbf5309139e40cd123f8cc0bb1897fbdc133327890766f6d77230c"} Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.009111 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qt6sq" podStartSLOduration=125.009085694 podStartE2EDuration="2m5.009085694s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:21.004852325 +0000 UTC m=+144.807829609" watchObservedRunningTime="2026-01-29 16:14:21.009085694 +0000 UTC m=+144.812062958" Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.030877 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:21 crc kubenswrapper[4895]: E0129 16:14:21.032727 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 16:14:21.53270712 +0000 UTC m=+145.335684474 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.056976 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-22gbx" event={"ID":"6fe9b00e-af32-4089-8a31-d0c1735d001f","Type":"ContainerStarted","Data":"912f43e51e3a2914111a01e8f638de65c2ff6de74c5d3bba6c1ed2d9d4db761b"} Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.072833 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk" event={"ID":"164e7cd8-e106-4eab-81ea-bc00c73daf2a","Type":"ContainerStarted","Data":"864777e105f51df66bf623cf7146cc51b02ed2e5352529b3777361a072dcc2a1"} Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.072888 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk" event={"ID":"164e7cd8-e106-4eab-81ea-bc00c73daf2a","Type":"ContainerStarted","Data":"78e137a78ef2f07f4bc34bfdd8dfbfbde04ef2240943b523278b5b5974f01404"} Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.074318 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 
16:14:21.074357 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.116704 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-22gbx" podStartSLOduration=6.116670036 podStartE2EDuration="6.116670036s" podCreationTimestamp="2026-01-29 16:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:21.066972276 +0000 UTC m=+144.869949540" watchObservedRunningTime="2026-01-29 16:14:21.116670036 +0000 UTC m=+144.919647300" Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.142952 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.143917 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:21 crc kubenswrapper[4895]: E0129 16:14:21.144084 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:21.64405202 +0000 UTC m=+145.447029284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.167594 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.181172 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4xrqk" podStartSLOduration=125.181148843 podStartE2EDuration="2m5.181148843s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:21.179607987 +0000 UTC m=+144.982585261" watchObservedRunningTime="2026-01-29 16:14:21.181148843 +0000 UTC m=+144.984126107" Jan 29 16:14:21 crc kubenswrapper[4895]: E0129 16:14:21.181680 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:21.681659435 +0000 UTC m=+145.484636699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.262123 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:14:21 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 16:14:21 crc kubenswrapper[4895]: [+]process-running ok Jan 29 16:14:21 crc kubenswrapper[4895]: healthz check failed Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.262198 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.270941 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:21 crc kubenswrapper[4895]: E0129 16:14:21.271473 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 16:14:21.771455818 +0000 UTC m=+145.574433072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.372717 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:21 crc kubenswrapper[4895]: E0129 16:14:21.373562 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:21.87354789 +0000 UTC m=+145.676525154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.478768 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:21 crc kubenswrapper[4895]: E0129 16:14:21.479177 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:21.979160075 +0000 UTC m=+145.782137339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.580024 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:21 crc kubenswrapper[4895]: E0129 16:14:21.580428 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:22.080411067 +0000 UTC m=+145.883388331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.680608 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:21 crc kubenswrapper[4895]: E0129 16:14:21.681470 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:22.181453585 +0000 UTC m=+145.984430839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.783032 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:21 crc kubenswrapper[4895]: E0129 16:14:21.783461 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:22.283446275 +0000 UTC m=+146.086423539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.884064 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:21 crc kubenswrapper[4895]: E0129 16:14:21.884533 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:22.384501123 +0000 UTC m=+146.187478387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.970956 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.971284 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:21 crc kubenswrapper[4895]: I0129 16:14:21.985171 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:21 crc kubenswrapper[4895]: E0129 16:14:21.985826 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:22.485801236 +0000 UTC m=+146.288778560 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.088599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:22 crc kubenswrapper[4895]: E0129 16:14:22.089233 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:22.589209569 +0000 UTC m=+146.392186833 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.108787 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" event={"ID":"ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98","Type":"ContainerStarted","Data":"5d6cd0bd2aaa308d234dab4e02f35d0871267cab4cff6ac00bee4458ba2804a1"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.136905 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" event={"ID":"d705d04a-92ce-41c8-91f8-c035a8dfa98a","Type":"ContainerStarted","Data":"e79cf0dd836ea82bdd2ff86207447054e4e525c44e2ff75e1423181ca5ca51db"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.137511 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.141013 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:14:22 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 16:14:22 crc kubenswrapper[4895]: [+]process-running ok Jan 29 16:14:22 crc kubenswrapper[4895]: healthz check failed Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.141084 4895 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.145919 4895 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nsjmx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.145997 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" podUID="d705d04a-92ce-41c8-91f8-c035a8dfa98a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.169877 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g9q58" event={"ID":"85eba540-f075-4ac1-9667-f462b08ebefa","Type":"ContainerStarted","Data":"8e9e94641f5355ce280aebf85dfc52dfc11677bc9f37e73b94de7f26bfcbda5a"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.169937 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g9q58" event={"ID":"85eba540-f075-4ac1-9667-f462b08ebefa","Type":"ContainerStarted","Data":"32c326dceabb59f7ff01a70813991ce6ee75c9e99705f64d84471a966b2b41f3"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.173278 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" event={"ID":"da56ae41-00cb-4345-a6be-1ceb542b8afe","Type":"ContainerStarted","Data":"ce3b60ed6622a0491d64cab367020660c45750d6cfaaf6ad2aefd96bcb5b7fd6"} Jan 29 16:14:22 crc 
kubenswrapper[4895]: I0129 16:14:22.173321 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" event={"ID":"da56ae41-00cb-4345-a6be-1ceb542b8afe","Type":"ContainerStarted","Data":"54548b3770dc6002344c0ca7e1072ffaa4b0fb8c7dc8d23997bb84124e8aa632"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.186860 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" podStartSLOduration=125.186837576 podStartE2EDuration="2m5.186837576s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:22.184728016 +0000 UTC m=+145.987705290" watchObservedRunningTime="2026-01-29 16:14:22.186837576 +0000 UTC m=+145.989814840" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.187653 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6bcff" event={"ID":"01f63105-32cc-4bc3-a677-8d0a7d967af2","Type":"ContainerStarted","Data":"086d0b446eb7b9f5672b54025faacdba9fbc12b02b66c92a11871a7053024149"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.197297 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" event={"ID":"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d","Type":"ContainerStarted","Data":"1f6661932c7603d89fe82b142f3ec7d6e1987c006c0da9d8b7f34dd91e311f26"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.198755 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:22 crc kubenswrapper[4895]: E0129 16:14:22.199390 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:22.699369161 +0000 UTC m=+146.502346415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.215124 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wb4kv" event={"ID":"72cc9605-664e-4897-93c5-b386c20517d1","Type":"ContainerStarted","Data":"09ad74e0d69aecec3bdd8604d892a963ef815288be2d459932a8017e5764e108"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.221272 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" event={"ID":"06e1dd20-1df7-45ad-805b-64235d3521d7","Type":"ContainerStarted","Data":"a3fa08c1492876963f9b7002eb1c5af83fa6e4a48e6a16051730c4b66a9d5222"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.227680 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5tcbt" event={"ID":"253b52d8-4b46-4b35-a97d-8dd6f1a00cb0","Type":"ContainerStarted","Data":"070336d31817260ae2ed5e0fa9a884dbf7915e219a8173eeec54f67b40143b40"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.227745 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-5tcbt" event={"ID":"253b52d8-4b46-4b35-a97d-8dd6f1a00cb0","Type":"ContainerStarted","Data":"d467ac0dcd439fd2a5a97d62f9a78513e77d01c7cfdf5358ef2241c7b5409c90"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.227968 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" podStartSLOduration=126.227950943 podStartE2EDuration="2m6.227950943s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:22.22439309 +0000 UTC m=+146.027370354" watchObservedRunningTime="2026-01-29 16:14:22.227950943 +0000 UTC m=+146.030928207" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.231640 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw" event={"ID":"c2b62e00-db4e-4ce9-8dd1-717159043f83","Type":"ContainerStarted","Data":"055cf33b0d37386870696d4f54ef8a5c6484e4865acdff71d1c782f9f2d9b98a"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.231688 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw" event={"ID":"c2b62e00-db4e-4ce9-8dd1-717159043f83","Type":"ContainerStarted","Data":"7f444683080ee475bdfc53872e895a4c721821f7ca47e3a890f81bf9a6a87b32"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.233038 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jjgzj" event={"ID":"8d5c56cc-9bff-4531-b57f-6a14c1d0802a","Type":"ContainerStarted","Data":"6da8f81b4f5a33f1898160580d51d478d3b18fba50f11bc8531819039bf923c3"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.249048 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-f945r" podStartSLOduration=125.249023829 podStartE2EDuration="2m5.249023829s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:22.248352964 +0000 UTC m=+146.051330228" watchObservedRunningTime="2026-01-29 16:14:22.249023829 +0000 UTC m=+146.052001103" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.262810 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" event={"ID":"2b221011-ac95-4cb7-8374-a7b61f91ce72","Type":"ContainerStarted","Data":"b9f971f1f3ef6c787df0138fa25c1331c0da2fdfdced1deadac0a4895839c6ae"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.262929 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" event={"ID":"2b221011-ac95-4cb7-8374-a7b61f91ce72","Type":"ContainerStarted","Data":"5f4c651321d9eda0f920b0ac11339096af2403435feaf45bbb5a7bc3a4ce5b5b"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.304185 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:22 crc kubenswrapper[4895]: E0129 16:14:22.306474 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:22.80644952 +0000 UTC m=+146.609426784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.308305 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" event={"ID":"d08826f7-331b-4ed8-98c2-2faaab99d384","Type":"ContainerStarted","Data":"0c7e479ac497a50cf4e444506e089635fa4f5d7bf180a22a856efbc79942ac8b"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.308365 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" event={"ID":"d08826f7-331b-4ed8-98c2-2faaab99d384","Type":"ContainerStarted","Data":"3367f276e7c3c2d12feafb11a0742b7999f04a2da5ab502a1056d018ca3cbace"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.309454 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.320149 4895 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sx29z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.320211 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" podUID="d08826f7-331b-4ed8-98c2-2faaab99d384" containerName="olm-operator" probeResult="failure" output="Get 
\"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.346996 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" event={"ID":"bccbee59-7684-41ba-900e-a9e5bf164b80","Type":"ContainerStarted","Data":"4a5d0b7dd97ce1312c63db0a360cd8dd009e612d19bb1c1f26f844182e890717"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.348873 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dp6dw" podStartSLOduration=125.348821818 podStartE2EDuration="2m5.348821818s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:22.303286936 +0000 UTC m=+146.106264220" watchObservedRunningTime="2026-01-29 16:14:22.348821818 +0000 UTC m=+146.151799082" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.351002 4895 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8xfpj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.351076 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" podUID="bccbee59-7684-41ba-900e-a9e5bf164b80" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.351496 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ingress-canary/ingress-canary-5tcbt" podStartSLOduration=7.35148732 podStartE2EDuration="7.35148732s" podCreationTimestamp="2026-01-29 16:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:22.346812891 +0000 UTC m=+146.149790155" watchObservedRunningTime="2026-01-29 16:14:22.35148732 +0000 UTC m=+146.154464584" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.381569 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" event={"ID":"249177f9-7b1e-4d39-a400-e625862f53c3","Type":"ContainerStarted","Data":"fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.383066 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.386346 4895 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-j8jdx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.386434 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" podUID="249177f9-7b1e-4d39-a400-e625862f53c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.404162 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" 
event={"ID":"3d3c04e5-aec8-4352-9eae-ab6e388931eb","Type":"ContainerStarted","Data":"40c514c6c8e378e86c9889e277b422052edbcfa00b4df5d0f2ea48d1444e3507"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.405612 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:22 crc kubenswrapper[4895]: E0129 16:14:22.407513 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:22.907487227 +0000 UTC m=+146.710464551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.418556 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" event={"ID":"46dfaa48-e2cb-47b9-9c47-0ff6b9ae3d5a","Type":"ContainerStarted","Data":"6cfeb621fcff04e65090e391645f203531fe4d182e5cce90629545470b87ccf5"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.448523 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pcdgm" 
event={"ID":"f2305b0f-2d33-4c71-a687-0533fe97d02d","Type":"ContainerStarted","Data":"0d28e16fddaea2b5a0941f5e01308390b71825c3502d723c1520794217a93d7b"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.468473 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" podStartSLOduration=125.468449072 podStartE2EDuration="2m5.468449072s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:22.467278565 +0000 UTC m=+146.270255849" watchObservedRunningTime="2026-01-29 16:14:22.468449072 +0000 UTC m=+146.271426336" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.469238 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-jjgzj" podStartSLOduration=125.469231301 podStartE2EDuration="2m5.469231301s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:22.392481265 +0000 UTC m=+146.195458529" watchObservedRunningTime="2026-01-29 16:14:22.469231301 +0000 UTC m=+146.272208565" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.479666 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" event={"ID":"a22956a0-592e-4592-9237-6c42acfa8b56","Type":"ContainerStarted","Data":"6ae2beddc4b35fee1043588a6377b11e40bbe5dcbf286278ecb64f2f5e9a0181"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.488853 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-snck7" 
event={"ID":"9414a16a-b347-4f0e-af41-9ff94ea7cc05","Type":"ContainerStarted","Data":"f7ba402e0da32a558e559766bd55864444a54ace7222a359ae5a76bda7d43d07"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.490493 4895 patch_prober.go:28] interesting pod/console-operator-58897d9998-snck7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.490546 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-snck7" podUID="9414a16a-b347-4f0e-af41-9ff94ea7cc05" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.511481 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:22 crc kubenswrapper[4895]: E0129 16:14:22.513289 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:23.013264937 +0000 UTC m=+146.816242201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.519909 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" event={"ID":"96f8638c-32af-4ed6-87de-790aa042e15c","Type":"ContainerStarted","Data":"131c440d6a95a4aca0e7ad748fe091ac3ca8ec4654878c6914d6bb0a121952d2"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.519961 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" event={"ID":"96f8638c-32af-4ed6-87de-790aa042e15c","Type":"ContainerStarted","Data":"0d158df3703017fc67a0dc98bd7634ed6bfd47fcea091a14024f72647af3049f"} Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.538552 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" podStartSLOduration=125.538534531 podStartE2EDuration="2m5.538534531s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:22.537590679 +0000 UTC m=+146.340567943" watchObservedRunningTime="2026-01-29 16:14:22.538534531 +0000 UTC m=+146.341511795" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.617545 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:22 crc kubenswrapper[4895]: E0129 16:14:22.627337 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:23.12731567 +0000 UTC m=+146.930292934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.627612 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-w4jl6" podStartSLOduration=125.627589607 podStartE2EDuration="2m5.627589607s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:22.609202144 +0000 UTC m=+146.412179418" watchObservedRunningTime="2026-01-29 16:14:22.627589607 +0000 UTC m=+146.430566871" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.627911 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5vw7" podStartSLOduration=125.627893714 podStartE2EDuration="2m5.627893714s" 
podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:22.575141843 +0000 UTC m=+146.378119117" watchObservedRunningTime="2026-01-29 16:14:22.627893714 +0000 UTC m=+146.430870978" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.719599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:22 crc kubenswrapper[4895]: E0129 16:14:22.720135 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:23.220115524 +0000 UTC m=+147.023092788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.821304 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:22 crc kubenswrapper[4895]: E0129 16:14:22.821713 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:23.321698624 +0000 UTC m=+147.124675888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.881640 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r7srz" podStartSLOduration=125.881614374 podStartE2EDuration="2m5.881614374s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:22.673314932 +0000 UTC m=+146.476292206" watchObservedRunningTime="2026-01-29 16:14:22.881614374 +0000 UTC m=+146.684591638" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.882714 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" podStartSLOduration=125.8827081 podStartE2EDuration="2m5.8827081s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:22.88016852 +0000 UTC m=+146.683145784" watchObservedRunningTime="2026-01-29 16:14:22.8827081 +0000 UTC m=+146.685685374" Jan 29 16:14:22 crc kubenswrapper[4895]: I0129 16:14:22.924460 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:22 crc kubenswrapper[4895]: E0129 16:14:22.924879 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:23.424847621 +0000 UTC m=+147.227824885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.026083 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.026485 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:23.526470732 +0000 UTC m=+147.329447986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.056947 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.127458 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.127691 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:23.627655414 +0000 UTC m=+147.430632678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.127904 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.128264 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:23.628248778 +0000 UTC m=+147.431226042 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.131746 4895 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-mzj8m container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.131805 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" podUID="9d07af33-8811-4683-883c-6e20ef713ea4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.131943 4895 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-mzj8m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.132023 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" podUID="9d07af33-8811-4683-883c-6e20ef713ea4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: 
connection refused" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.137778 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:14:23 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 16:14:23 crc kubenswrapper[4895]: [+]process-running ok Jan 29 16:14:23 crc kubenswrapper[4895]: healthz check failed Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.137817 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.228907 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.229425 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:23.729406068 +0000 UTC m=+147.532383332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.237318 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.334106 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.334603 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:23.834579942 +0000 UTC m=+147.637557206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.434795 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.435387 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:23.935367903 +0000 UTC m=+147.738345167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.464756 4895 csr.go:261] certificate signing request csr-smrfm is approved, waiting to be issued Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.484290 4895 csr.go:257] certificate signing request csr-smrfm is issued Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.523873 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" event={"ID":"7d2b65e1-1f81-46fd-8cbf-bf1797d8852d","Type":"ContainerStarted","Data":"16ea1f271f6aeda4634468b60a3174e2004eb741dc6a036c94fbb0c6462b6cbb"} Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.526115 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wb4kv" event={"ID":"72cc9605-664e-4897-93c5-b386c20517d1","Type":"ContainerStarted","Data":"b7bd688812c4f340ab568feaf47d4c69bcccb2da0e10e0af31f1ef13f95d92ac"} Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.526246 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wb4kv" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.528132 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" event={"ID":"3d3c04e5-aec8-4352-9eae-ab6e388931eb","Type":"ContainerStarted","Data":"67249160db233f76069a93251c268a79ff6760fec69f7f73b1db21414f1ad5c4"} Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.530081 4895 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dqlvl" event={"ID":"96f8638c-32af-4ed6-87de-790aa042e15c","Type":"ContainerStarted","Data":"dfb5abc77cd0e0001ffe138023f7e002d5d62bdc7de20bcf892665eed4722bda"} Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.531914 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc67m" event={"ID":"c00dfac3-8de9-4673-a6b4-2965a204accb","Type":"ContainerStarted","Data":"ba71a63a2eefd9d6d0dddb1c9e323d658fbe89a4a4d92f50b30e508410a84944"} Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.535218 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6bcff" event={"ID":"01f63105-32cc-4bc3-a677-8d0a7d967af2","Type":"ContainerStarted","Data":"0a31c082ea7e76128410eee58753188d918331b1225da8537b8f68c15cfbdf72"} Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.537062 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.537649 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" event={"ID":"ce4fcfdd-879e-4a07-9aa7-e5ff7ffc8d98","Type":"ContainerStarted","Data":"17f77816e1d91bc8b99ceb0680d2793ed7e70fe151bea4f72724d753c65fb5f8"} Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.537715 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 16:14:24.037693172 +0000 UTC m=+147.840670506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.538442 4895 patch_prober.go:28] interesting pod/console-operator-58897d9998-snck7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.538489 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-snck7" podUID="9414a16a-b347-4f0e-af41-9ff94ea7cc05" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.538497 4895 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8xfpj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.538537 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" podUID="bccbee59-7684-41ba-900e-a9e5bf164b80" containerName="catalog-operator" probeResult="failure" output="Get 
\"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.538635 4895 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nsjmx container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.538694 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" podUID="d705d04a-92ce-41c8-91f8-c035a8dfa98a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.538908 4895 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-j8jdx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.538952 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" podUID="249177f9-7b1e-4d39-a400-e625862f53c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.539097 4895 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sx29z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 29 16:14:23 crc kubenswrapper[4895]: 
I0129 16:14:23.539129 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" podUID="d08826f7-331b-4ed8-98c2-2faaab99d384" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.567977 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fjk5z" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.569174 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bmmgq" podStartSLOduration=126.569161102 podStartE2EDuration="2m6.569161102s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:23.567988694 +0000 UTC m=+147.370965958" watchObservedRunningTime="2026-01-29 16:14:23.569161102 +0000 UTC m=+147.372138366" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.637884 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-6bcff" podStartSLOduration=126.637849698 podStartE2EDuration="2m6.637849698s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:23.63495063 +0000 UTC m=+147.437927914" watchObservedRunningTime="2026-01-29 16:14:23.637849698 +0000 UTC m=+147.440826962" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.639496 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.639763 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.139723952 +0000 UTC m=+147.942701216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.641341 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.644178 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.144141536 +0000 UTC m=+147.947118910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.684264 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7sbh6" podStartSLOduration=126.684245699 podStartE2EDuration="2m6.684245699s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:23.681543786 +0000 UTC m=+147.484521050" watchObservedRunningTime="2026-01-29 16:14:23.684245699 +0000 UTC m=+147.487222963" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.714794 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wb4kv" podStartSLOduration=8.714767368 podStartE2EDuration="8.714767368s" podCreationTimestamp="2026-01-29 16:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:23.714037441 +0000 UTC m=+147.517014725" watchObservedRunningTime="2026-01-29 16:14:23.714767368 +0000 UTC m=+147.517744632" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.743671 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.743989 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.243908794 +0000 UTC m=+148.046886058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.744285 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.744675 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.244660361 +0000 UTC m=+148.047637625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.789044 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g9q58" podStartSLOduration=126.789017075 podStartE2EDuration="2m6.789017075s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:23.745990792 +0000 UTC m=+147.548968076" watchObservedRunningTime="2026-01-29 16:14:23.789017075 +0000 UTC m=+147.591994349" Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.849366 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.849637 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.34960394 +0000 UTC m=+148.152581204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.849768 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.850306 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.350287097 +0000 UTC m=+148.153264441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.951256 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.951617 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.451476508 +0000 UTC m=+148.254453772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:23 crc kubenswrapper[4895]: I0129 16:14:23.951757 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:23 crc kubenswrapper[4895]: E0129 16:14:23.952117 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.452101692 +0000 UTC m=+148.255078956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.052925 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.053428 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.553392296 +0000 UTC m=+148.356369560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.053519 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.054009 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.5539886 +0000 UTC m=+148.356965864 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.137914 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:14:24 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 16:14:24 crc kubenswrapper[4895]: [+]process-running ok Jan 29 16:14:24 crc kubenswrapper[4895]: healthz check failed Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.137984 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.155741 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.156201 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 16:14:24.656182965 +0000 UTC m=+148.459160229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.257374 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.257752 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.757737924 +0000 UTC m=+148.560715188 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.358303 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.358894 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.858873634 +0000 UTC m=+148.661850898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.460555 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.461332 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:24.961309524 +0000 UTC m=+148.764286788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.486334 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 16:09:23 +0000 UTC, rotation deadline is 2026-12-18 00:20:07.319096861 +0000 UTC Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.486384 4895 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7736h5m42.832716077s for next certificate rotation Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.550274 4895 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-j8jdx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.550371 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" podUID="249177f9-7b1e-4d39-a400-e625862f53c3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.550583 4895 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sx29z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 
29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.550637 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" podUID="d08826f7-331b-4ed8-98c2-2faaab99d384" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.551366 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.562361 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.562558 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.062525276 +0000 UTC m=+148.865502530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.562809 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.563208 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.063193441 +0000 UTC m=+148.866170695 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.663670 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.664935 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.164901564 +0000 UTC m=+148.967878828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.766044 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.766463 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.266446094 +0000 UTC m=+149.069423358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.867574 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.867829 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.367790878 +0000 UTC m=+149.170768142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.867991 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.868384 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.368369312 +0000 UTC m=+149.171346576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.969477 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.969692 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.969765 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:14:24 crc kubenswrapper[4895]: E0129 16:14:24.970911 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.470882294 +0000 UTC m=+149.273859558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:24 crc kubenswrapper[4895]: I0129 16:14:24.975005 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:24.997559 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.071752 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 
16:14:25.071811 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.071881 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:25 crc kubenswrapper[4895]: E0129 16:14:25.072254 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.572240999 +0000 UTC m=+149.375218263 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.080643 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.105334 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.141008 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:14:25 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 16:14:25 crc kubenswrapper[4895]: [+]process-running ok Jan 29 16:14:25 crc kubenswrapper[4895]: healthz check failed Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.141075 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" 
podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.176461 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:25 crc kubenswrapper[4895]: E0129 16:14:25.176902 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.676885861 +0000 UTC m=+149.479863125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.254891 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.266643 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.278261 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:25 crc kubenswrapper[4895]: E0129 16:14:25.278694 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.778679626 +0000 UTC m=+149.581656890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.281252 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.379265 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:25 crc kubenswrapper[4895]: E0129 16:14:25.379693 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.879674822 +0000 UTC m=+149.682652086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.480858 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:25 crc kubenswrapper[4895]: E0129 16:14:25.481265 4895 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:25.981249943 +0000 UTC m=+149.784227207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.583626 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:25 crc kubenswrapper[4895]: E0129 16:14:25.583830 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:26.083797105 +0000 UTC m=+149.886774369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.584082 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:25 crc kubenswrapper[4895]: E0129 16:14:25.584490 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:26.084475202 +0000 UTC m=+149.887452456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.589629 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc67m" event={"ID":"c00dfac3-8de9-4673-a6b4-2965a204accb","Type":"ContainerStarted","Data":"eeb66a8a30ff475a7ce327af5d4693fd3371b5466253fe4a617834a961c0e885"} Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.629570 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" podStartSLOduration=128.629547012 podStartE2EDuration="2m8.629547012s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:23.842577315 +0000 UTC m=+147.645554579" watchObservedRunningTime="2026-01-29 16:14:25.629547012 +0000 UTC m=+149.432524276" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.632943 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4slsz"] Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.634102 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.639361 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.686013 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:25 crc kubenswrapper[4895]: E0129 16:14:25.686256 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:26.186224736 +0000 UTC m=+149.989202000 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.686309 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.686390 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-catalog-content\") pod \"community-operators-4slsz\" (UID: \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\") " pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.686491 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5ts9\" (UniqueName: \"kubernetes.io/projected/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-kube-api-access-b5ts9\") pod \"community-operators-4slsz\" (UID: \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\") " pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.686830 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-utilities\") pod \"community-operators-4slsz\" (UID: \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\") " pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:14:25 crc kubenswrapper[4895]: E0129 16:14:25.688557 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:26.1885458 +0000 UTC m=+149.991523054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.717261 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4slsz"] Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.788901 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.789292 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-utilities\") pod \"community-operators-4slsz\" (UID: \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\") " pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:14:25 
crc kubenswrapper[4895]: I0129 16:14:25.789374 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-catalog-content\") pod \"community-operators-4slsz\" (UID: \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\") " pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.789396 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5ts9\" (UniqueName: \"kubernetes.io/projected/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-kube-api-access-b5ts9\") pod \"community-operators-4slsz\" (UID: \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\") " pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.790191 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-catalog-content\") pod \"community-operators-4slsz\" (UID: \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\") " pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:14:25 crc kubenswrapper[4895]: E0129 16:14:25.790285 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:26.290268684 +0000 UTC m=+150.093245948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.790836 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-utilities\") pod \"community-operators-4slsz\" (UID: \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\") " pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.834196 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5ts9\" (UniqueName: \"kubernetes.io/projected/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-kube-api-access-b5ts9\") pod \"community-operators-4slsz\" (UID: \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\") " pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.895142 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:25 crc kubenswrapper[4895]: E0129 16:14:25.896205 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 16:14:26.396189336 +0000 UTC m=+150.199166600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.963546 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:14:25 crc kubenswrapper[4895]: I0129 16:14:25.996771 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:25 crc kubenswrapper[4895]: E0129 16:14:25.997147 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:26.497126931 +0000 UTC m=+150.300104195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.022097 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t8mm9"] Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.030422 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.044342 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t8mm9"] Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.098217 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289a9c3e-eaaf-434d-87d3-24097a01057e-catalog-content\") pod \"community-operators-t8mm9\" (UID: \"289a9c3e-eaaf-434d-87d3-24097a01057e\") " pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.098489 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnvlx\" (UniqueName: \"kubernetes.io/projected/289a9c3e-eaaf-434d-87d3-24097a01057e-kube-api-access-cnvlx\") pod \"community-operators-t8mm9\" (UID: \"289a9c3e-eaaf-434d-87d3-24097a01057e\") " pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.098654 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289a9c3e-eaaf-434d-87d3-24097a01057e-utilities\") pod \"community-operators-t8mm9\" (UID: \"289a9c3e-eaaf-434d-87d3-24097a01057e\") " pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.098743 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:26 crc kubenswrapper[4895]: E0129 16:14:26.099255 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:26.599225053 +0000 UTC m=+150.402202317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.130075 4895 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.138657 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mzj8m" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.139827 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:14:26 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 16:14:26 crc kubenswrapper[4895]: [+]process-running ok Jan 29 16:14:26 crc kubenswrapper[4895]: healthz check failed Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.139916 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.200167 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.200393 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnvlx\" (UniqueName: \"kubernetes.io/projected/289a9c3e-eaaf-434d-87d3-24097a01057e-kube-api-access-cnvlx\") pod \"community-operators-t8mm9\" (UID: \"289a9c3e-eaaf-434d-87d3-24097a01057e\") " pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.200472 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289a9c3e-eaaf-434d-87d3-24097a01057e-utilities\") pod \"community-operators-t8mm9\" (UID: \"289a9c3e-eaaf-434d-87d3-24097a01057e\") " pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.200561 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289a9c3e-eaaf-434d-87d3-24097a01057e-catalog-content\") pod \"community-operators-t8mm9\" (UID: \"289a9c3e-eaaf-434d-87d3-24097a01057e\") " pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.201111 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289a9c3e-eaaf-434d-87d3-24097a01057e-catalog-content\") pod \"community-operators-t8mm9\" (UID: \"289a9c3e-eaaf-434d-87d3-24097a01057e\") " pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:14:26 crc kubenswrapper[4895]: E0129 16:14:26.201212 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:26.701192373 +0000 UTC m=+150.504169637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.201806 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289a9c3e-eaaf-434d-87d3-24097a01057e-utilities\") pod \"community-operators-t8mm9\" (UID: \"289a9c3e-eaaf-434d-87d3-24097a01057e\") " pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.236855 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4tj74"] Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.257713 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnvlx\" (UniqueName: \"kubernetes.io/projected/289a9c3e-eaaf-434d-87d3-24097a01057e-kube-api-access-cnvlx\") pod \"community-operators-t8mm9\" (UID: \"289a9c3e-eaaf-434d-87d3-24097a01057e\") " pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.269200 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.271674 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.295288 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tj74"] Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.307198 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044df5fd-0d96-4aab-b09e-24870d0e4bd9-catalog-content\") pod \"certified-operators-4tj74\" (UID: \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\") " pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.307389 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9lrm\" (UniqueName: \"kubernetes.io/projected/044df5fd-0d96-4aab-b09e-24870d0e4bd9-kube-api-access-x9lrm\") pod \"certified-operators-4tj74\" (UID: \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\") " pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.307734 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.307804 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/044df5fd-0d96-4aab-b09e-24870d0e4bd9-utilities\") pod \"certified-operators-4tj74\" (UID: \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\") " pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:14:26 crc kubenswrapper[4895]: E0129 16:14:26.308304 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:26.808286082 +0000 UTC m=+150.611263346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:26 crc kubenswrapper[4895]: W0129 16:14:26.363802 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-223a2e458837760e774271334b7fc405b4686cf8240d8dc8e0ce33adbb5c936c WatchSource:0}: Error finding container 223a2e458837760e774271334b7fc405b4686cf8240d8dc8e0ce33adbb5c936c: Status 404 returned error can't find the container with id 223a2e458837760e774271334b7fc405b4686cf8240d8dc8e0ce33adbb5c936c Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.409705 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:26 crc 
kubenswrapper[4895]: E0129 16:14:26.410011 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:14:26.909972185 +0000 UTC m=+150.712949449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.410069 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9lrm\" (UniqueName: \"kubernetes.io/projected/044df5fd-0d96-4aab-b09e-24870d0e4bd9-kube-api-access-x9lrm\") pod \"certified-operators-4tj74\" (UID: \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\") " pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.410241 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.410271 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044df5fd-0d96-4aab-b09e-24870d0e4bd9-utilities\") pod \"certified-operators-4tj74\" (UID: 
\"044df5fd-0d96-4aab-b09e-24870d0e4bd9\") " pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.410363 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044df5fd-0d96-4aab-b09e-24870d0e4bd9-catalog-content\") pod \"certified-operators-4tj74\" (UID: \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\") " pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.411058 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044df5fd-0d96-4aab-b09e-24870d0e4bd9-catalog-content\") pod \"certified-operators-4tj74\" (UID: \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\") " pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:14:26 crc kubenswrapper[4895]: E0129 16:14:26.411728 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:14:26.911716616 +0000 UTC m=+150.714693880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9v2kn" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.412048 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044df5fd-0d96-4aab-b09e-24870d0e4bd9-utilities\") pod \"certified-operators-4tj74\" (UID: \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\") " pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.429179 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7nhpc"] Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.431322 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.439804 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9lrm\" (UniqueName: \"kubernetes.io/projected/044df5fd-0d96-4aab-b09e-24870d0e4bd9-kube-api-access-x9lrm\") pod \"certified-operators-4tj74\" (UID: \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\") " pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.445181 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7nhpc"] Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.469592 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.511612 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.511887 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-utilities\") pod \"certified-operators-7nhpc\" (UID: \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\") " pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.511945 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-catalog-content\") pod \"certified-operators-7nhpc\" (UID: \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\") " pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.511986 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7v95\" (UniqueName: \"kubernetes.io/projected/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-kube-api-access-m7v95\") pod \"certified-operators-7nhpc\" (UID: \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\") " pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:14:26 crc kubenswrapper[4895]: E0129 16:14:26.512116 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-29 16:14:27.012099018 +0000 UTC m=+150.815076282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.554795 4895 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T16:14:26.130106219Z","Handler":null,"Name":""} Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.567132 4895 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.567199 4895 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.615564 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7v95\" (UniqueName: \"kubernetes.io/projected/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-kube-api-access-m7v95\") pod \"certified-operators-7nhpc\" (UID: \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\") " pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.615660 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-utilities\") pod \"certified-operators-7nhpc\" (UID: \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\") " pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.615694 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.615717 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-catalog-content\") pod \"certified-operators-7nhpc\" (UID: \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\") " pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.616241 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-catalog-content\") pod \"certified-operators-7nhpc\" (UID: \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\") " pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.616906 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-utilities\") pod \"certified-operators-7nhpc\" (UID: \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\") " pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.636427 4895 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.636469 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.643309 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4840b2a2bb05e49a7a656aae57ff6b64303ed59bafd3724232761dae2f9d6471"} Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.656331 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7v95\" (UniqueName: \"kubernetes.io/projected/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-kube-api-access-m7v95\") pod \"certified-operators-7nhpc\" (UID: \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\") " pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.657603 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc67m" event={"ID":"c00dfac3-8de9-4673-a6b4-2965a204accb","Type":"ContainerStarted","Data":"65dc8ea792647147d9b42a136e866c1078a0198d4ca21a22bdde9fb8947973f6"} Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.657643 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xc67m" 
event={"ID":"c00dfac3-8de9-4673-a6b4-2965a204accb","Type":"ContainerStarted","Data":"a5a0e255fe355596690fe33219691c7978699b3c334f23079891270a5cebe16d"} Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.661454 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"223a2e458837760e774271334b7fc405b4686cf8240d8dc8e0ce33adbb5c936c"} Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.662577 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.694779 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9v2kn\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.702774 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xc67m" podStartSLOduration=11.702744094 podStartE2EDuration="11.702744094s" podCreationTimestamp="2026-01-29 16:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:26.68771489 +0000 UTC m=+150.490692164" watchObservedRunningTime="2026-01-29 16:14:26.702744094 +0000 UTC m=+150.505721358" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.716597 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.717513 4895 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.721149 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.721565 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6babd62-a7db-43b5-bb64-9a71f3b7d47f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6babd62-a7db-43b5-bb64-9a71f3b7d47f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.721749 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6babd62-a7db-43b5-bb64-9a71f3b7d47f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6babd62-a7db-43b5-bb64-9a71f3b7d47f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.723044 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.723240 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.727285 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.778824 4895 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.784520 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.825660 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6babd62-a7db-43b5-bb64-9a71f3b7d47f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6babd62-a7db-43b5-bb64-9a71f3b7d47f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.839199 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.845542 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6babd62-a7db-43b5-bb64-9a71f3b7d47f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6babd62-a7db-43b5-bb64-9a71f3b7d47f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.845839 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6babd62-a7db-43b5-bb64-9a71f3b7d47f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e6babd62-a7db-43b5-bb64-9a71f3b7d47f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.857830 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6babd62-a7db-43b5-bb64-9a71f3b7d47f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e6babd62-a7db-43b5-bb64-9a71f3b7d47f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:14:26 crc kubenswrapper[4895]: I0129 16:14:26.894761 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4slsz"] Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.049739 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.084957 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.143584 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.143636 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.146895 4895 patch_prober.go:28] interesting pod/console-f9d7485db-p6ck2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.146957 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-p6ck2" podUID="6dd34441-4294-4e90-9f2d-909c5aecdff7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.148180 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.148215 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-6bcff" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.157151 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:14:27 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 16:14:27 crc kubenswrapper[4895]: [+]process-running ok Jan 29 16:14:27 crc 
kubenswrapper[4895]: healthz check failed Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.157438 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.164772 4895 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6bcff container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 16:14:27 crc kubenswrapper[4895]: [+]log ok Jan 29 16:14:27 crc kubenswrapper[4895]: [+]etcd ok Jan 29 16:14:27 crc kubenswrapper[4895]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 16:14:27 crc kubenswrapper[4895]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 16:14:27 crc kubenswrapper[4895]: [+]poststarthook/max-in-flight-filter ok Jan 29 16:14:27 crc kubenswrapper[4895]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 16:14:27 crc kubenswrapper[4895]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 29 16:14:27 crc kubenswrapper[4895]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 29 16:14:27 crc kubenswrapper[4895]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 29 16:14:27 crc kubenswrapper[4895]: [+]poststarthook/project.openshift.io-projectcache ok Jan 29 16:14:27 crc kubenswrapper[4895]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 29 16:14:27 crc kubenswrapper[4895]: [+]poststarthook/openshift.io-startinformers ok Jan 29 16:14:27 crc kubenswrapper[4895]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 29 16:14:27 crc kubenswrapper[4895]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 16:14:27 crc 
kubenswrapper[4895]: livez check failed Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.164853 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6bcff" podUID="01f63105-32cc-4bc3-a677-8d0a7d967af2" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.191634 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tj74"] Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.255500 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t8mm9"] Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.311364 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7nhpc"] Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.334243 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.334578 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.335281 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.335299 4895 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.347390 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9v2kn"] Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.548039 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.670005 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9191f6a7afc81c96b416abbeb31722e670dca8498c84ab17e4b35a02d683bda5"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.670081 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f1878a69b37243c560d0bd3788d96b504124d03eeec641ea476adf93a18fdf89"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.670284 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.672522 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" event={"ID":"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e","Type":"ContainerStarted","Data":"ce601eaf30554e24e0d15851730c5113bad1a0e153a1f80787d7027ba3a8bb16"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.674246 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d62cc38d4bb53876ae01cd30ee135f04e60f3333f5005da14526f031c14dcd6c"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.682517 4895 generic.go:334] "Generic (PLEG): container finished" podID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" containerID="d6278f9edfea55e7260d3fb076bff45470b2c32f849f1c63f501f76b9a913240" exitCode=0 Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.682618 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4slsz" event={"ID":"f5ead4e3-bcde-4c47-8173-9f7773f0a45f","Type":"ContainerDied","Data":"d6278f9edfea55e7260d3fb076bff45470b2c32f849f1c63f501f76b9a913240"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.682697 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4slsz" event={"ID":"f5ead4e3-bcde-4c47-8173-9f7773f0a45f","Type":"ContainerStarted","Data":"4b099004196c328a2f02bedee55265d20bdfd31530d92b7b533c9b69b81120b8"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.684225 4895 generic.go:334] "Generic (PLEG): container finished" podID="289a9c3e-eaaf-434d-87d3-24097a01057e" containerID="413d232afee33b9bb3f2803f5bf0fd749535c80cbdd7bad686fce1adf799ce83" exitCode=0 Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.684289 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8mm9" event={"ID":"289a9c3e-eaaf-434d-87d3-24097a01057e","Type":"ContainerDied","Data":"413d232afee33b9bb3f2803f5bf0fd749535c80cbdd7bad686fce1adf799ce83"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.684324 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8mm9" 
event={"ID":"289a9c3e-eaaf-434d-87d3-24097a01057e","Type":"ContainerStarted","Data":"a60efd50206675aba52fa33471a6041f265ffb906f9ffde4fbaafb8292cfe857"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.684390 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.688621 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"85ef01a157d2f8050c412008bfa5adaeb27a2c9824cf12f0fec6c317d2f55e71"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.693001 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6babd62-a7db-43b5-bb64-9a71f3b7d47f","Type":"ContainerStarted","Data":"a491b25127e981e099c761a79cb8f2e25c77c6251da3702cc16203d2f4e88340"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.694339 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nhpc" event={"ID":"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81","Type":"ContainerStarted","Data":"73a3878983dd7ee858f222fef7c58fbd58d74787af531d2642a620c110900bb3"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.695485 4895 generic.go:334] "Generic (PLEG): container finished" podID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" containerID="57e0597f2d75f437f6c1ae015335c7016f5fe859f5c43d46364211115f386101" exitCode=0 Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.697608 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tj74" event={"ID":"044df5fd-0d96-4aab-b09e-24870d0e4bd9","Type":"ContainerDied","Data":"57e0597f2d75f437f6c1ae015335c7016f5fe859f5c43d46364211115f386101"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.697647 4895 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-4tj74" event={"ID":"044df5fd-0d96-4aab-b09e-24870d0e4bd9","Type":"ContainerStarted","Data":"76567b2220c54295fe508af45a7c9c5936e539dde4b78ad517c6389e8fced071"} Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.823999 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.824452 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.854537 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-snck7" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.876015 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m467m"] Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.877253 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.893087 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8xfpj" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.894006 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.908779 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m467m"] Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.965152 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c405329-c382-44a2-8c9d-74976164f122-catalog-content\") pod \"redhat-marketplace-m467m\" (UID: \"8c405329-c382-44a2-8c9d-74976164f122\") " pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.965229 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l2zq\" (UniqueName: \"kubernetes.io/projected/8c405329-c382-44a2-8c9d-74976164f122-kube-api-access-8l2zq\") pod \"redhat-marketplace-m467m\" (UID: \"8c405329-c382-44a2-8c9d-74976164f122\") " pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:14:27 crc kubenswrapper[4895]: I0129 16:14:27.965290 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c405329-c382-44a2-8c9d-74976164f122-utilities\") pod \"redhat-marketplace-m467m\" (UID: \"8c405329-c382-44a2-8c9d-74976164f122\") " pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.081883 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c405329-c382-44a2-8c9d-74976164f122-catalog-content\") pod \"redhat-marketplace-m467m\" (UID: \"8c405329-c382-44a2-8c9d-74976164f122\") " pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.082257 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l2zq\" (UniqueName: \"kubernetes.io/projected/8c405329-c382-44a2-8c9d-74976164f122-kube-api-access-8l2zq\") pod \"redhat-marketplace-m467m\" (UID: \"8c405329-c382-44a2-8c9d-74976164f122\") " pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.082464 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c405329-c382-44a2-8c9d-74976164f122-utilities\") pod \"redhat-marketplace-m467m\" (UID: \"8c405329-c382-44a2-8c9d-74976164f122\") " pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.093934 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c405329-c382-44a2-8c9d-74976164f122-utilities\") pod \"redhat-marketplace-m467m\" (UID: \"8c405329-c382-44a2-8c9d-74976164f122\") " pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.099224 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c405329-c382-44a2-8c9d-74976164f122-catalog-content\") pod \"redhat-marketplace-m467m\" (UID: \"8c405329-c382-44a2-8c9d-74976164f122\") " pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.138995 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.180096 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:14:28 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 16:14:28 crc kubenswrapper[4895]: [+]process-running ok Jan 29 16:14:28 crc kubenswrapper[4895]: healthz check failed Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.180159 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.194525 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l2zq\" (UniqueName: \"kubernetes.io/projected/8c405329-c382-44a2-8c9d-74976164f122-kube-api-access-8l2zq\") pod \"redhat-marketplace-m467m\" (UID: \"8c405329-c382-44a2-8c9d-74976164f122\") " pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.234539 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p8lq7"] Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.236245 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.238813 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sx29z" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.242024 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.242971 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nsjmx" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.246633 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p8lq7"] Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.256742 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.391811 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408c9cd8-1d91-4a4b-9e57-748578b4704e-catalog-content\") pod \"redhat-marketplace-p8lq7\" (UID: \"408c9cd8-1d91-4a4b-9e57-748578b4704e\") " pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.391943 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkxs8\" (UniqueName: \"kubernetes.io/projected/408c9cd8-1d91-4a4b-9e57-748578b4704e-kube-api-access-kkxs8\") pod \"redhat-marketplace-p8lq7\" (UID: \"408c9cd8-1d91-4a4b-9e57-748578b4704e\") " pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.392002 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408c9cd8-1d91-4a4b-9e57-748578b4704e-utilities\") pod \"redhat-marketplace-p8lq7\" (UID: \"408c9cd8-1d91-4a4b-9e57-748578b4704e\") " pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.494489 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kkxs8\" (UniqueName: \"kubernetes.io/projected/408c9cd8-1d91-4a4b-9e57-748578b4704e-kube-api-access-kkxs8\") pod \"redhat-marketplace-p8lq7\" (UID: \"408c9cd8-1d91-4a4b-9e57-748578b4704e\") " pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.494547 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408c9cd8-1d91-4a4b-9e57-748578b4704e-utilities\") pod \"redhat-marketplace-p8lq7\" (UID: \"408c9cd8-1d91-4a4b-9e57-748578b4704e\") " pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.494610 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408c9cd8-1d91-4a4b-9e57-748578b4704e-catalog-content\") pod \"redhat-marketplace-p8lq7\" (UID: \"408c9cd8-1d91-4a4b-9e57-748578b4704e\") " pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.495146 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408c9cd8-1d91-4a4b-9e57-748578b4704e-catalog-content\") pod \"redhat-marketplace-p8lq7\" (UID: \"408c9cd8-1d91-4a4b-9e57-748578b4704e\") " pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.495386 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408c9cd8-1d91-4a4b-9e57-748578b4704e-utilities\") pod \"redhat-marketplace-p8lq7\" (UID: \"408c9cd8-1d91-4a4b-9e57-748578b4704e\") " pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.517982 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kkxs8\" (UniqueName: \"kubernetes.io/projected/408c9cd8-1d91-4a4b-9e57-748578b4704e-kube-api-access-kkxs8\") pod \"redhat-marketplace-p8lq7\" (UID: \"408c9cd8-1d91-4a4b-9e57-748578b4704e\") " pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.568635 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.751801 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" event={"ID":"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e","Type":"ContainerStarted","Data":"c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e"} Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.752435 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.752759 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m467m"] Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.755788 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6babd62-a7db-43b5-bb64-9a71f3b7d47f","Type":"ContainerStarted","Data":"d31e6e74f4ee155712cb0ea328b1a7bb57d660f75fd355b86e7c9459accf20a2"} Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.759852 4895 generic.go:334] "Generic (PLEG): container finished" podID="da56ae41-00cb-4345-a6be-1ceb542b8afe" containerID="ce3b60ed6622a0491d64cab367020660c45750d6cfaaf6ad2aefd96bcb5b7fd6" exitCode=0 Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.759976 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" 
event={"ID":"da56ae41-00cb-4345-a6be-1ceb542b8afe","Type":"ContainerDied","Data":"ce3b60ed6622a0491d64cab367020660c45750d6cfaaf6ad2aefd96bcb5b7fd6"} Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.762770 4895 generic.go:334] "Generic (PLEG): container finished" podID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" containerID="baff8f969c1e1def9f3cf43a969b8505763796e5fdf5cee4bebf8453048e0a79" exitCode=0 Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.764392 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nhpc" event={"ID":"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81","Type":"ContainerDied","Data":"baff8f969c1e1def9f3cf43a969b8505763796e5fdf5cee4bebf8453048e0a79"} Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.779587 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" podStartSLOduration=131.77953089 podStartE2EDuration="2m11.77953089s" podCreationTimestamp="2026-01-29 16:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:28.779508789 +0000 UTC m=+152.582486063" watchObservedRunningTime="2026-01-29 16:14:28.77953089 +0000 UTC m=+152.582508154" Jan 29 16:14:28 crc kubenswrapper[4895]: I0129 16:14:28.823674 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.823613587 podStartE2EDuration="2.823613587s" podCreationTimestamp="2026-01-29 16:14:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:28.797730308 +0000 UTC m=+152.600707592" watchObservedRunningTime="2026-01-29 16:14:28.823613587 +0000 UTC m=+152.626590871" Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.099047 4895 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-operators-zdds6"]
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.101116 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdds6"]
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.101244 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdds6"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.103452 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.135641 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p8lq7"]
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.148467 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:14:29 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Jan 29 16:14:29 crc kubenswrapper[4895]: [+]process-running ok
Jan 29 16:14:29 crc kubenswrapper[4895]: healthz check failed
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.148553 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.237988 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf371b3-ba58-4be0-a05c-b88d01ffc60d-utilities\") pod \"redhat-operators-zdds6\" (UID: \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\") " pod="openshift-marketplace/redhat-operators-zdds6"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.238090 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf371b3-ba58-4be0-a05c-b88d01ffc60d-catalog-content\") pod \"redhat-operators-zdds6\" (UID: \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\") " pod="openshift-marketplace/redhat-operators-zdds6"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.238120 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv8c6\" (UniqueName: \"kubernetes.io/projected/adf371b3-ba58-4be0-a05c-b88d01ffc60d-kube-api-access-fv8c6\") pod \"redhat-operators-zdds6\" (UID: \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\") " pod="openshift-marketplace/redhat-operators-zdds6"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.345235 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf371b3-ba58-4be0-a05c-b88d01ffc60d-catalog-content\") pod \"redhat-operators-zdds6\" (UID: \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\") " pod="openshift-marketplace/redhat-operators-zdds6"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.345303 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv8c6\" (UniqueName: \"kubernetes.io/projected/adf371b3-ba58-4be0-a05c-b88d01ffc60d-kube-api-access-fv8c6\") pod \"redhat-operators-zdds6\" (UID: \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\") " pod="openshift-marketplace/redhat-operators-zdds6"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.345364 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf371b3-ba58-4be0-a05c-b88d01ffc60d-utilities\") pod \"redhat-operators-zdds6\" (UID: \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\") " pod="openshift-marketplace/redhat-operators-zdds6"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.352423 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf371b3-ba58-4be0-a05c-b88d01ffc60d-catalog-content\") pod \"redhat-operators-zdds6\" (UID: \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\") " pod="openshift-marketplace/redhat-operators-zdds6"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.352631 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf371b3-ba58-4be0-a05c-b88d01ffc60d-utilities\") pod \"redhat-operators-zdds6\" (UID: \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\") " pod="openshift-marketplace/redhat-operators-zdds6"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.408553 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv8c6\" (UniqueName: \"kubernetes.io/projected/adf371b3-ba58-4be0-a05c-b88d01ffc60d-kube-api-access-fv8c6\") pod \"redhat-operators-zdds6\" (UID: \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\") " pod="openshift-marketplace/redhat-operators-zdds6"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.420413 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdds6"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.431709 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z9nrd"]
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.434165 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9nrd"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.442827 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9nrd"]
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.553259 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06277e29-59af-446a-81a3-b3b8b1b5ab0a-utilities\") pod \"redhat-operators-z9nrd\" (UID: \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\") " pod="openshift-marketplace/redhat-operators-z9nrd"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.553339 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtmch\" (UniqueName: \"kubernetes.io/projected/06277e29-59af-446a-81a3-b3b8b1b5ab0a-kube-api-access-rtmch\") pod \"redhat-operators-z9nrd\" (UID: \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\") " pod="openshift-marketplace/redhat-operators-z9nrd"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.553489 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06277e29-59af-446a-81a3-b3b8b1b5ab0a-catalog-content\") pod \"redhat-operators-z9nrd\" (UID: \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\") " pod="openshift-marketplace/redhat-operators-z9nrd"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.655812 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06277e29-59af-446a-81a3-b3b8b1b5ab0a-utilities\") pod \"redhat-operators-z9nrd\" (UID: \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\") " pod="openshift-marketplace/redhat-operators-z9nrd"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.655889 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtmch\" (UniqueName: \"kubernetes.io/projected/06277e29-59af-446a-81a3-b3b8b1b5ab0a-kube-api-access-rtmch\") pod \"redhat-operators-z9nrd\" (UID: \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\") " pod="openshift-marketplace/redhat-operators-z9nrd"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.655981 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06277e29-59af-446a-81a3-b3b8b1b5ab0a-catalog-content\") pod \"redhat-operators-z9nrd\" (UID: \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\") " pod="openshift-marketplace/redhat-operators-z9nrd"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.656728 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06277e29-59af-446a-81a3-b3b8b1b5ab0a-utilities\") pod \"redhat-operators-z9nrd\" (UID: \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\") " pod="openshift-marketplace/redhat-operators-z9nrd"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.656957 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06277e29-59af-446a-81a3-b3b8b1b5ab0a-catalog-content\") pod \"redhat-operators-z9nrd\" (UID: \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\") " pod="openshift-marketplace/redhat-operators-z9nrd"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.682681 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtmch\" (UniqueName: \"kubernetes.io/projected/06277e29-59af-446a-81a3-b3b8b1b5ab0a-kube-api-access-rtmch\") pod \"redhat-operators-z9nrd\" (UID: \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\") " pod="openshift-marketplace/redhat-operators-z9nrd"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.760906 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.762172 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.768828 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.769116 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.769376 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.790701 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9nrd"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.803630 4895 generic.go:334] "Generic (PLEG): container finished" podID="408c9cd8-1d91-4a4b-9e57-748578b4704e" containerID="ee2c9cccd2626ba5b9215dd1da38d60ffc398eb5e6971b7ff86866a7d49fac65" exitCode=0
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.805989 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8lq7" event={"ID":"408c9cd8-1d91-4a4b-9e57-748578b4704e","Type":"ContainerDied","Data":"ee2c9cccd2626ba5b9215dd1da38d60ffc398eb5e6971b7ff86866a7d49fac65"}
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.806069 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8lq7" event={"ID":"408c9cd8-1d91-4a4b-9e57-748578b4704e","Type":"ContainerStarted","Data":"67f07a8ad1b5922bf338a3f87b1a07e12c6823e0d95117bfd033eaf580bddfa2"}
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.827316 4895 generic.go:334] "Generic (PLEG): container finished" podID="8c405329-c382-44a2-8c9d-74976164f122" containerID="1db9aaf91c52d48aebc8de689190f4736601af9f04b5870a7a133e722a4b6137" exitCode=0
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.827392 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m467m" event={"ID":"8c405329-c382-44a2-8c9d-74976164f122","Type":"ContainerDied","Data":"1db9aaf91c52d48aebc8de689190f4736601af9f04b5870a7a133e722a4b6137"}
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.827425 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m467m" event={"ID":"8c405329-c382-44a2-8c9d-74976164f122","Type":"ContainerStarted","Data":"ddd0101747a64270f1f63b0d77df7619ace8236cdeea96710fd6efe814103503"}
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.858113 4895 generic.go:334] "Generic (PLEG): container finished" podID="e6babd62-a7db-43b5-bb64-9a71f3b7d47f" containerID="d31e6e74f4ee155712cb0ea328b1a7bb57d660f75fd355b86e7c9459accf20a2" exitCode=0
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.858216 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6babd62-a7db-43b5-bb64-9a71f3b7d47f","Type":"ContainerDied","Data":"d31e6e74f4ee155712cb0ea328b1a7bb57d660f75fd355b86e7c9459accf20a2"}
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.861613 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/315516e4-6108-4799-a01d-874651597c73-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"315516e4-6108-4799-a01d-874651597c73\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.861876 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/315516e4-6108-4799-a01d-874651597c73-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"315516e4-6108-4799-a01d-874651597c73\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.934741 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdds6"]
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.963238 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/315516e4-6108-4799-a01d-874651597c73-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"315516e4-6108-4799-a01d-874651597c73\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.963857 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/315516e4-6108-4799-a01d-874651597c73-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"315516e4-6108-4799-a01d-874651597c73\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.963948 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/315516e4-6108-4799-a01d-874651597c73-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"315516e4-6108-4799-a01d-874651597c73\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:14:29 crc kubenswrapper[4895]: W0129 16:14:29.974959 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadf371b3_ba58_4be0_a05c_b88d01ffc60d.slice/crio-f45d82106559d81e5e7bfff8000ec7dc8c4820220a6d11b4186e6842da8476b5 WatchSource:0}: Error finding container f45d82106559d81e5e7bfff8000ec7dc8c4820220a6d11b4186e6842da8476b5: Status 404 returned error can't find the container with id f45d82106559d81e5e7bfff8000ec7dc8c4820220a6d11b4186e6842da8476b5
Jan 29 16:14:29 crc kubenswrapper[4895]: I0129 16:14:29.982983 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/315516e4-6108-4799-a01d-874651597c73-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"315516e4-6108-4799-a01d-874651597c73\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.103810 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.143746 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:14:30 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Jan 29 16:14:30 crc kubenswrapper[4895]: [+]process-running ok
Jan 29 16:14:30 crc kubenswrapper[4895]: healthz check failed
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.143821 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.387007 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9nrd"]
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.439484 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g"
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.585054 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da56ae41-00cb-4345-a6be-1ceb542b8afe-config-volume\") pod \"da56ae41-00cb-4345-a6be-1ceb542b8afe\" (UID: \"da56ae41-00cb-4345-a6be-1ceb542b8afe\") "
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.585120 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da56ae41-00cb-4345-a6be-1ceb542b8afe-secret-volume\") pod \"da56ae41-00cb-4345-a6be-1ceb542b8afe\" (UID: \"da56ae41-00cb-4345-a6be-1ceb542b8afe\") "
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.585273 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrdkd\" (UniqueName: \"kubernetes.io/projected/da56ae41-00cb-4345-a6be-1ceb542b8afe-kube-api-access-nrdkd\") pod \"da56ae41-00cb-4345-a6be-1ceb542b8afe\" (UID: \"da56ae41-00cb-4345-a6be-1ceb542b8afe\") "
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.595449 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da56ae41-00cb-4345-a6be-1ceb542b8afe-config-volume" (OuterVolumeSpecName: "config-volume") pod "da56ae41-00cb-4345-a6be-1ceb542b8afe" (UID: "da56ae41-00cb-4345-a6be-1ceb542b8afe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.600292 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da56ae41-00cb-4345-a6be-1ceb542b8afe-kube-api-access-nrdkd" (OuterVolumeSpecName: "kube-api-access-nrdkd") pod "da56ae41-00cb-4345-a6be-1ceb542b8afe" (UID: "da56ae41-00cb-4345-a6be-1ceb542b8afe"). InnerVolumeSpecName "kube-api-access-nrdkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.604457 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da56ae41-00cb-4345-a6be-1ceb542b8afe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da56ae41-00cb-4345-a6be-1ceb542b8afe" (UID: "da56ae41-00cb-4345-a6be-1ceb542b8afe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.649143 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 16:14:30 crc kubenswrapper[4895]: W0129 16:14:30.658425 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod315516e4_6108_4799_a01d_874651597c73.slice/crio-5b4bd3d892cbc95760cfcc72cdd59b6173935c2100fb46dc0a24dc90b1a30b65 WatchSource:0}: Error finding container 5b4bd3d892cbc95760cfcc72cdd59b6173935c2100fb46dc0a24dc90b1a30b65: Status 404 returned error can't find the container with id 5b4bd3d892cbc95760cfcc72cdd59b6173935c2100fb46dc0a24dc90b1a30b65
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.687280 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da56ae41-00cb-4345-a6be-1ceb542b8afe-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.687322 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da56ae41-00cb-4345-a6be-1ceb542b8afe-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.687336 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrdkd\" (UniqueName: \"kubernetes.io/projected/da56ae41-00cb-4345-a6be-1ceb542b8afe-kube-api-access-nrdkd\") on node \"crc\" DevicePath \"\""
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.907353 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"315516e4-6108-4799-a01d-874651597c73","Type":"ContainerStarted","Data":"5b4bd3d892cbc95760cfcc72cdd59b6173935c2100fb46dc0a24dc90b1a30b65"}
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.913018 4895 generic.go:334] "Generic (PLEG): container finished" podID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" containerID="4dba8f18612d6e0b2a5e6a854070b42fcf7aa28f4fed57e21a458dab3cd748da" exitCode=0
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.913091 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9nrd" event={"ID":"06277e29-59af-446a-81a3-b3b8b1b5ab0a","Type":"ContainerDied","Data":"4dba8f18612d6e0b2a5e6a854070b42fcf7aa28f4fed57e21a458dab3cd748da"}
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.913122 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9nrd" event={"ID":"06277e29-59af-446a-81a3-b3b8b1b5ab0a","Type":"ContainerStarted","Data":"154e4c74e520a280f0710fb035e574040131fa56140bf9e8ccaf9f5fa70db0b1"}
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.927603 4895 generic.go:334] "Generic (PLEG): container finished" podID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" containerID="4d156a5f67fd0630f5f9db3e3f4c82a9c71a813c0c997131eab9cb585b44f237" exitCode=0
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.927691 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdds6" event={"ID":"adf371b3-ba58-4be0-a05c-b88d01ffc60d","Type":"ContainerDied","Data":"4d156a5f67fd0630f5f9db3e3f4c82a9c71a813c0c997131eab9cb585b44f237"}
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.927732 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdds6" event={"ID":"adf371b3-ba58-4be0-a05c-b88d01ffc60d","Type":"ContainerStarted","Data":"f45d82106559d81e5e7bfff8000ec7dc8c4820220a6d11b4186e6842da8476b5"}
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.937095 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g"
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.939258 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g" event={"ID":"da56ae41-00cb-4345-a6be-1ceb542b8afe","Type":"ContainerDied","Data":"54548b3770dc6002344c0ca7e1072ffaa4b0fb8c7dc8d23997bb84124e8aa632"}
Jan 29 16:14:30 crc kubenswrapper[4895]: I0129 16:14:30.939335 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54548b3770dc6002344c0ca7e1072ffaa4b0fb8c7dc8d23997bb84124e8aa632"
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.150425 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:14:31 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Jan 29 16:14:31 crc kubenswrapper[4895]: [+]process-running ok
Jan 29 16:14:31 crc kubenswrapper[4895]: healthz check failed
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.150522 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.340658 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.499671 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6babd62-a7db-43b5-bb64-9a71f3b7d47f-kube-api-access\") pod \"e6babd62-a7db-43b5-bb64-9a71f3b7d47f\" (UID: \"e6babd62-a7db-43b5-bb64-9a71f3b7d47f\") "
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.499750 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6babd62-a7db-43b5-bb64-9a71f3b7d47f-kubelet-dir\") pod \"e6babd62-a7db-43b5-bb64-9a71f3b7d47f\" (UID: \"e6babd62-a7db-43b5-bb64-9a71f3b7d47f\") "
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.500206 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6babd62-a7db-43b5-bb64-9a71f3b7d47f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e6babd62-a7db-43b5-bb64-9a71f3b7d47f" (UID: "e6babd62-a7db-43b5-bb64-9a71f3b7d47f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.512555 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6babd62-a7db-43b5-bb64-9a71f3b7d47f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e6babd62-a7db-43b5-bb64-9a71f3b7d47f" (UID: "e6babd62-a7db-43b5-bb64-9a71f3b7d47f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.601780 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6babd62-a7db-43b5-bb64-9a71f3b7d47f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.601820 4895 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6babd62-a7db-43b5-bb64-9a71f3b7d47f-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.993279 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.993304 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e6babd62-a7db-43b5-bb64-9a71f3b7d47f","Type":"ContainerDied","Data":"a491b25127e981e099c761a79cb8f2e25c77c6251da3702cc16203d2f4e88340"}
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.993364 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a491b25127e981e099c761a79cb8f2e25c77c6251da3702cc16203d2f4e88340"
Jan 29 16:14:31 crc kubenswrapper[4895]: I0129 16:14:31.996042 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"315516e4-6108-4799-a01d-874651597c73","Type":"ContainerStarted","Data":"2502b2633f65445de189e237998b4db1889ef1cab02ee95d6f9174b540935a05"}
Jan 29 16:14:32 crc kubenswrapper[4895]: I0129 16:14:32.021473 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.02144341 podStartE2EDuration="3.02144341s" podCreationTimestamp="2026-01-29 16:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:14:32.020311284 +0000 UTC m=+155.823288558" watchObservedRunningTime="2026-01-29 16:14:32.02144341 +0000 UTC m=+155.824420684"
Jan 29 16:14:32 crc kubenswrapper[4895]: I0129 16:14:32.140147 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:14:32 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Jan 29 16:14:32 crc kubenswrapper[4895]: [+]process-running ok
Jan 29 16:14:32 crc kubenswrapper[4895]: healthz check failed
Jan 29 16:14:32 crc kubenswrapper[4895]: I0129 16:14:32.140255 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:14:32 crc kubenswrapper[4895]: I0129 16:14:32.155884 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-6bcff"
Jan 29 16:14:32 crc kubenswrapper[4895]: I0129 16:14:32.160405 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-6bcff"
Jan 29 16:14:32 crc kubenswrapper[4895]: I0129 16:14:32.998932 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wb4kv"
Jan 29 16:14:33 crc kubenswrapper[4895]: I0129 16:14:33.111470 4895 generic.go:334] "Generic (PLEG): container finished" podID="315516e4-6108-4799-a01d-874651597c73" containerID="2502b2633f65445de189e237998b4db1889ef1cab02ee95d6f9174b540935a05" exitCode=0
Jan 29 16:14:33 crc kubenswrapper[4895]: I0129 16:14:33.113749 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"315516e4-6108-4799-a01d-874651597c73","Type":"ContainerDied","Data":"2502b2633f65445de189e237998b4db1889ef1cab02ee95d6f9174b540935a05"}
Jan 29 16:14:33 crc kubenswrapper[4895]: I0129 16:14:33.149277 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:14:33 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Jan 29 16:14:33 crc kubenswrapper[4895]: [+]process-running ok
Jan 29 16:14:33 crc kubenswrapper[4895]: healthz check failed
Jan 29 16:14:33 crc kubenswrapper[4895]: I0129 16:14:33.149353 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:14:34 crc kubenswrapper[4895]: I0129 16:14:34.138216 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:14:34 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Jan 29 16:14:34 crc kubenswrapper[4895]: [+]process-running ok
Jan 29 16:14:34 crc kubenswrapper[4895]: healthz check failed
Jan 29 16:14:34 crc kubenswrapper[4895]: I0129 16:14:34.138304 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:14:34 crc kubenswrapper[4895]: I0129 16:14:34.625012 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:14:34 crc kubenswrapper[4895]: I0129 16:14:34.800075 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/315516e4-6108-4799-a01d-874651597c73-kube-api-access\") pod \"315516e4-6108-4799-a01d-874651597c73\" (UID: \"315516e4-6108-4799-a01d-874651597c73\") "
Jan 29 16:14:34 crc kubenswrapper[4895]: I0129 16:14:34.800266 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/315516e4-6108-4799-a01d-874651597c73-kubelet-dir\") pod \"315516e4-6108-4799-a01d-874651597c73\" (UID: \"315516e4-6108-4799-a01d-874651597c73\") "
Jan 29 16:14:34 crc kubenswrapper[4895]: I0129 16:14:34.800711 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/315516e4-6108-4799-a01d-874651597c73-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "315516e4-6108-4799-a01d-874651597c73" (UID: "315516e4-6108-4799-a01d-874651597c73"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:14:34 crc kubenswrapper[4895]: I0129 16:14:34.816658 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/315516e4-6108-4799-a01d-874651597c73-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "315516e4-6108-4799-a01d-874651597c73" (UID: "315516e4-6108-4799-a01d-874651597c73"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:14:34 crc kubenswrapper[4895]: I0129 16:14:34.903369 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/315516e4-6108-4799-a01d-874651597c73-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 16:14:34 crc kubenswrapper[4895]: I0129 16:14:34.903419 4895 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/315516e4-6108-4799-a01d-874651597c73-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 16:14:35 crc kubenswrapper[4895]: I0129 16:14:35.148895 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:14:35 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Jan 29 16:14:35 crc kubenswrapper[4895]: [+]process-running ok
Jan 29 16:14:35 crc kubenswrapper[4895]: healthz check failed
Jan 29 16:14:35 crc kubenswrapper[4895]: I0129 16:14:35.149061 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:14:35 crc kubenswrapper[4895]: I0129 16:14:35.166242 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"315516e4-6108-4799-a01d-874651597c73","Type":"ContainerDied","Data":"5b4bd3d892cbc95760cfcc72cdd59b6173935c2100fb46dc0a24dc90b1a30b65"}
Jan 29 16:14:35 crc kubenswrapper[4895]: I0129 16:14:35.166318 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b4bd3d892cbc95760cfcc72cdd59b6173935c2100fb46dc0a24dc90b1a30b65"
Jan 29 16:14:35 crc kubenswrapper[4895]: I0129 16:14:35.166528 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:14:36 crc kubenswrapper[4895]: I0129 16:14:36.144654 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:14:36 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Jan 29 16:14:36 crc kubenswrapper[4895]: [+]process-running ok
Jan 29 16:14:36 crc kubenswrapper[4895]: healthz check failed
Jan 29 16:14:36 crc kubenswrapper[4895]: I0129 16:14:36.144742 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:14:37 crc kubenswrapper[4895]: I0129 16:14:37.137500 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:14:37 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld
Jan 29 16:14:37 crc kubenswrapper[4895]: [+]process-running ok
Jan 29 16:14:37 crc kubenswrapper[4895]: healthz check failed
Jan 29 16:14:37 crc kubenswrapper[4895]: I0129 16:14:37.137631 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:14:37 crc kubenswrapper[4895]: I0129 16:14:37.141661 4895 patch_prober.go:28] interesting pod/console-f9d7485db-p6ck2 container/console namespace/openshift-console: Startup
probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 29 16:14:37 crc kubenswrapper[4895]: I0129 16:14:37.141729 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-p6ck2" podUID="6dd34441-4294-4e90-9f2d-909c5aecdff7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 29 16:14:37 crc kubenswrapper[4895]: I0129 16:14:37.322509 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:14:37 crc kubenswrapper[4895]: I0129 16:14:37.322575 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:14:37 crc kubenswrapper[4895]: I0129 16:14:37.322655 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:14:37 crc kubenswrapper[4895]: I0129 16:14:37.322738 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:14:38 crc kubenswrapper[4895]: I0129 16:14:38.138428 4895 patch_prober.go:28] interesting 
pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:14:38 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 16:14:38 crc kubenswrapper[4895]: [+]process-running ok Jan 29 16:14:38 crc kubenswrapper[4895]: healthz check failed Jan 29 16:14:38 crc kubenswrapper[4895]: I0129 16:14:38.138905 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:14:39 crc kubenswrapper[4895]: I0129 16:14:39.138924 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:14:39 crc kubenswrapper[4895]: [-]has-synced failed: reason withheld Jan 29 16:14:39 crc kubenswrapper[4895]: [+]process-running ok Jan 29 16:14:39 crc kubenswrapper[4895]: healthz check failed Jan 29 16:14:39 crc kubenswrapper[4895]: I0129 16:14:39.139270 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:14:39 crc kubenswrapper[4895]: I0129 16:14:39.426402 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:14:39 crc kubenswrapper[4895]: I0129 
16:14:39.456230 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5113e2b8-dc97-42a1-aa1c-3d604cada8c2-metrics-certs\") pod \"network-metrics-daemon-h9mkw\" (UID: \"5113e2b8-dc97-42a1-aa1c-3d604cada8c2\") " pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:14:39 crc kubenswrapper[4895]: I0129 16:14:39.671818 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-h9mkw" Jan 29 16:14:40 crc kubenswrapper[4895]: I0129 16:14:40.138311 4895 patch_prober.go:28] interesting pod/router-default-5444994796-s5x8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:14:40 crc kubenswrapper[4895]: [+]has-synced ok Jan 29 16:14:40 crc kubenswrapper[4895]: [+]process-running ok Jan 29 16:14:40 crc kubenswrapper[4895]: healthz check failed Jan 29 16:14:40 crc kubenswrapper[4895]: I0129 16:14:40.138414 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5x8l" podUID="9c39ec96-c5ce-40f8-80b5-68baacd59516" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:14:41 crc kubenswrapper[4895]: I0129 16:14:41.140610 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:41 crc kubenswrapper[4895]: I0129 16:14:41.152688 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-s5x8l" Jan 29 16:14:46 crc kubenswrapper[4895]: I0129 16:14:46.791609 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:14:47 crc kubenswrapper[4895]: I0129 16:14:47.146027 4895 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:47 crc kubenswrapper[4895]: I0129 16:14:47.151768 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:14:47 crc kubenswrapper[4895]: I0129 16:14:47.321756 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:14:47 crc kubenswrapper[4895]: I0129 16:14:47.321860 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:14:47 crc kubenswrapper[4895]: I0129 16:14:47.321889 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:14:47 crc kubenswrapper[4895]: I0129 16:14:47.321947 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-t675c" Jan 29 16:14:47 crc kubenswrapper[4895]: I0129 16:14:47.321963 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:14:47 crc kubenswrapper[4895]: I0129 16:14:47.322631 4895 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:14:47 crc kubenswrapper[4895]: I0129 16:14:47.322701 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:14:47 crc kubenswrapper[4895]: I0129 16:14:47.322763 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"488c571e2b7fb5311a39b80227aee282e675ca503fce7eacdf94255644894c31"} pod="openshift-console/downloads-7954f5f757-t675c" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 16:14:47 crc kubenswrapper[4895]: I0129 16:14:47.322962 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" containerID="cri-o://488c571e2b7fb5311a39b80227aee282e675ca503fce7eacdf94255644894c31" gracePeriod=2 Jan 29 16:14:48 crc kubenswrapper[4895]: I0129 16:14:48.406117 4895 generic.go:334] "Generic (PLEG): container finished" podID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerID="488c571e2b7fb5311a39b80227aee282e675ca503fce7eacdf94255644894c31" exitCode=0 Jan 29 16:14:48 crc kubenswrapper[4895]: I0129 16:14:48.406179 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t675c" event={"ID":"9dda0f03-a7e0-442d-b684-9b6b5a1885ab","Type":"ContainerDied","Data":"488c571e2b7fb5311a39b80227aee282e675ca503fce7eacdf94255644894c31"} Jan 29 16:14:57 crc kubenswrapper[4895]: I0129 
16:14:57.320207 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:14:57 crc kubenswrapper[4895]: I0129 16:14:57.321028 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:14:57 crc kubenswrapper[4895]: I0129 16:14:57.823071 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:14:57 crc kubenswrapper[4895]: I0129 16:14:57.823164 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:14:57 crc kubenswrapper[4895]: I0129 16:14:57.919503 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c7csh" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.139929 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l"] Jan 29 16:15:00 crc kubenswrapper[4895]: E0129 16:15:00.140680 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315516e4-6108-4799-a01d-874651597c73" 
containerName="pruner" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.140699 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="315516e4-6108-4799-a01d-874651597c73" containerName="pruner" Jan 29 16:15:00 crc kubenswrapper[4895]: E0129 16:15:00.140725 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da56ae41-00cb-4345-a6be-1ceb542b8afe" containerName="collect-profiles" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.140733 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="da56ae41-00cb-4345-a6be-1ceb542b8afe" containerName="collect-profiles" Jan 29 16:15:00 crc kubenswrapper[4895]: E0129 16:15:00.140747 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6babd62-a7db-43b5-bb64-9a71f3b7d47f" containerName="pruner" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.140755 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6babd62-a7db-43b5-bb64-9a71f3b7d47f" containerName="pruner" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.140900 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6babd62-a7db-43b5-bb64-9a71f3b7d47f" containerName="pruner" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.140921 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="315516e4-6108-4799-a01d-874651597c73" containerName="pruner" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.140934 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="da56ae41-00cb-4345-a6be-1ceb542b8afe" containerName="collect-profiles" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.141523 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.148372 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l"] Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.152140 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.152480 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.276275 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c306c8c0-a6f0-4811-9688-b811e9495c76-config-volume\") pod \"collect-profiles-29495055-fhr8l\" (UID: \"c306c8c0-a6f0-4811-9688-b811e9495c76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.276368 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p4jv\" (UniqueName: \"kubernetes.io/projected/c306c8c0-a6f0-4811-9688-b811e9495c76-kube-api-access-8p4jv\") pod \"collect-profiles-29495055-fhr8l\" (UID: \"c306c8c0-a6f0-4811-9688-b811e9495c76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.276420 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c306c8c0-a6f0-4811-9688-b811e9495c76-secret-volume\") pod \"collect-profiles-29495055-fhr8l\" (UID: \"c306c8c0-a6f0-4811-9688-b811e9495c76\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.378221 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c306c8c0-a6f0-4811-9688-b811e9495c76-config-volume\") pod \"collect-profiles-29495055-fhr8l\" (UID: \"c306c8c0-a6f0-4811-9688-b811e9495c76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.378314 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p4jv\" (UniqueName: \"kubernetes.io/projected/c306c8c0-a6f0-4811-9688-b811e9495c76-kube-api-access-8p4jv\") pod \"collect-profiles-29495055-fhr8l\" (UID: \"c306c8c0-a6f0-4811-9688-b811e9495c76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.378398 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c306c8c0-a6f0-4811-9688-b811e9495c76-secret-volume\") pod \"collect-profiles-29495055-fhr8l\" (UID: \"c306c8c0-a6f0-4811-9688-b811e9495c76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.379435 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c306c8c0-a6f0-4811-9688-b811e9495c76-config-volume\") pod \"collect-profiles-29495055-fhr8l\" (UID: \"c306c8c0-a6f0-4811-9688-b811e9495c76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.388507 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c306c8c0-a6f0-4811-9688-b811e9495c76-secret-volume\") pod \"collect-profiles-29495055-fhr8l\" (UID: \"c306c8c0-a6f0-4811-9688-b811e9495c76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.398169 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p4jv\" (UniqueName: \"kubernetes.io/projected/c306c8c0-a6f0-4811-9688-b811e9495c76-kube-api-access-8p4jv\") pod \"collect-profiles-29495055-fhr8l\" (UID: \"c306c8c0-a6f0-4811-9688-b811e9495c76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:00 crc kubenswrapper[4895]: I0129 16:15:00.479279 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:05 crc kubenswrapper[4895]: I0129 16:15:05.267652 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:15:05 crc kubenswrapper[4895]: I0129 16:15:05.752075 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:15:05 crc kubenswrapper[4895]: I0129 16:15:05.753119 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:15:05 crc kubenswrapper[4895]: I0129 16:15:05.757084 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:15:05 crc kubenswrapper[4895]: I0129 16:15:05.758394 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 16:15:05 crc kubenswrapper[4895]: I0129 16:15:05.758482 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 16:15:05 crc kubenswrapper[4895]: I0129 16:15:05.886488 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce8828ec-75ea-44d6-8d7b-d20a654fb23b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ce8828ec-75ea-44d6-8d7b-d20a654fb23b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:15:05 crc kubenswrapper[4895]: I0129 16:15:05.887022 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce8828ec-75ea-44d6-8d7b-d20a654fb23b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ce8828ec-75ea-44d6-8d7b-d20a654fb23b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:15:05 crc kubenswrapper[4895]: I0129 16:15:05.987927 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce8828ec-75ea-44d6-8d7b-d20a654fb23b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ce8828ec-75ea-44d6-8d7b-d20a654fb23b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:15:05 crc kubenswrapper[4895]: I0129 16:15:05.988024 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ce8828ec-75ea-44d6-8d7b-d20a654fb23b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ce8828ec-75ea-44d6-8d7b-d20a654fb23b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:15:05 crc kubenswrapper[4895]: I0129 16:15:05.988030 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce8828ec-75ea-44d6-8d7b-d20a654fb23b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ce8828ec-75ea-44d6-8d7b-d20a654fb23b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:15:06 crc kubenswrapper[4895]: I0129 16:15:06.014713 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce8828ec-75ea-44d6-8d7b-d20a654fb23b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ce8828ec-75ea-44d6-8d7b-d20a654fb23b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:15:06 crc kubenswrapper[4895]: I0129 16:15:06.077705 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:15:06 crc kubenswrapper[4895]: E0129 16:15:06.252316 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:15:06 crc kubenswrapper[4895]: E0129 16:15:06.253155 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnvlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-t8mm9_openshift-marketplace(289a9c3e-eaaf-434d-87d3-24097a01057e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:15:06 crc kubenswrapper[4895]: E0129 16:15:06.254594 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-t8mm9" podUID="289a9c3e-eaaf-434d-87d3-24097a01057e" Jan 29 16:15:07 crc kubenswrapper[4895]: I0129 16:15:07.319581 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:15:07 crc kubenswrapper[4895]: I0129 16:15:07.319657 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:15:10 crc kubenswrapper[4895]: I0129 16:15:10.387658 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pmkl9"] Jan 29 16:15:10 crc kubenswrapper[4895]: I0129 16:15:10.947125 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:15:10 crc kubenswrapper[4895]: I0129 16:15:10.948736 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:10 crc kubenswrapper[4895]: I0129 16:15:10.956725 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:15:11 crc kubenswrapper[4895]: I0129 16:15:11.073353 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18cb8033-bd36-4b53-8f71-7b2d8d527270-kube-api-access\") pod \"installer-9-crc\" (UID: \"18cb8033-bd36-4b53-8f71-7b2d8d527270\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:11 crc kubenswrapper[4895]: I0129 16:15:11.073467 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/18cb8033-bd36-4b53-8f71-7b2d8d527270-var-lock\") pod \"installer-9-crc\" (UID: \"18cb8033-bd36-4b53-8f71-7b2d8d527270\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:11 crc kubenswrapper[4895]: I0129 16:15:11.073519 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18cb8033-bd36-4b53-8f71-7b2d8d527270-kubelet-dir\") pod \"installer-9-crc\" (UID: \"18cb8033-bd36-4b53-8f71-7b2d8d527270\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:11 crc kubenswrapper[4895]: I0129 16:15:11.175311 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/18cb8033-bd36-4b53-8f71-7b2d8d527270-var-lock\") pod \"installer-9-crc\" (UID: \"18cb8033-bd36-4b53-8f71-7b2d8d527270\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:11 crc kubenswrapper[4895]: I0129 16:15:11.175395 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/18cb8033-bd36-4b53-8f71-7b2d8d527270-kubelet-dir\") pod \"installer-9-crc\" (UID: \"18cb8033-bd36-4b53-8f71-7b2d8d527270\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:11 crc kubenswrapper[4895]: I0129 16:15:11.175443 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18cb8033-bd36-4b53-8f71-7b2d8d527270-kube-api-access\") pod \"installer-9-crc\" (UID: \"18cb8033-bd36-4b53-8f71-7b2d8d527270\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:11 crc kubenswrapper[4895]: I0129 16:15:11.175896 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/18cb8033-bd36-4b53-8f71-7b2d8d527270-var-lock\") pod \"installer-9-crc\" (UID: \"18cb8033-bd36-4b53-8f71-7b2d8d527270\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:11 crc kubenswrapper[4895]: I0129 16:15:11.175941 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18cb8033-bd36-4b53-8f71-7b2d8d527270-kubelet-dir\") pod \"installer-9-crc\" (UID: \"18cb8033-bd36-4b53-8f71-7b2d8d527270\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:11 crc kubenswrapper[4895]: I0129 16:15:11.200496 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18cb8033-bd36-4b53-8f71-7b2d8d527270-kube-api-access\") pod \"installer-9-crc\" (UID: \"18cb8033-bd36-4b53-8f71-7b2d8d527270\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:11 crc kubenswrapper[4895]: I0129 16:15:11.291378 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:11 crc kubenswrapper[4895]: E0129 16:15:11.479352 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-t8mm9" podUID="289a9c3e-eaaf-434d-87d3-24097a01057e" Jan 29 16:15:11 crc kubenswrapper[4895]: E0129 16:15:11.553881 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:15:11 crc kubenswrapper[4895]: E0129 16:15:11.554102 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtmch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-z9nrd_openshift-marketplace(06277e29-59af-446a-81a3-b3b8b1b5ab0a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:15:11 crc kubenswrapper[4895]: E0129 16:15:11.555455 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-z9nrd" podUID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" Jan 29 16:15:13 crc 
kubenswrapper[4895]: E0129 16:15:13.085675 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-z9nrd" podUID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" Jan 29 16:15:13 crc kubenswrapper[4895]: E0129 16:15:13.170247 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:15:13 crc kubenswrapper[4895]: E0129 16:15:13.170520 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9lrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4tj74_openshift-marketplace(044df5fd-0d96-4aab-b09e-24870d0e4bd9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:15:13 crc kubenswrapper[4895]: E0129 16:15:13.171909 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4tj74" podUID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" Jan 29 16:15:13 crc 
kubenswrapper[4895]: E0129 16:15:13.192514 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:15:13 crc kubenswrapper[4895]: E0129 16:15:13.193099 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv8c6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-zdds6_openshift-marketplace(adf371b3-ba58-4be0-a05c-b88d01ffc60d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:15:13 crc kubenswrapper[4895]: E0129 16:15:13.194336 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zdds6" podUID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.337338 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zdds6" podUID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.337444 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4tj74" podUID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.415389 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.415830 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8l2zq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-m467m_openshift-marketplace(8c405329-c382-44a2-8c9d-74976164f122): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.417115 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-m467m" podUID="8c405329-c382-44a2-8c9d-74976164f122" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.418924 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.419069 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5ts9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSourc
e{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4slsz_openshift-marketplace(f5ead4e3-bcde-4c47-8173-9f7773f0a45f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.420750 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4slsz" podUID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.456554 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.456786 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kkxs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-p8lq7_openshift-marketplace(408c9cd8-1d91-4a4b-9e57-748578b4704e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.458029 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-p8lq7" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" Jan 29 16:15:14 crc 
kubenswrapper[4895]: E0129 16:15:14.515008 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.515224 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7v95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-7nhpc_openshift-marketplace(45b5c07a-bde1-4ffb-bf31-9c7c296d7f81): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.516744 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7nhpc" podUID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" Jan 29 16:15:14 crc kubenswrapper[4895]: I0129 16:15:14.583967 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t675c" event={"ID":"9dda0f03-a7e0-442d-b684-9b6b5a1885ab","Type":"ContainerStarted","Data":"30485a283535469d3d744a5a6a259864ee685140f7855da4153d6eac3a3891d7"} Jan 29 16:15:14 crc kubenswrapper[4895]: I0129 16:15:14.584019 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t675c" Jan 29 16:15:14 crc kubenswrapper[4895]: I0129 16:15:14.584795 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:15:14 crc kubenswrapper[4895]: I0129 16:15:14.584825 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.589341 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m467m" podUID="8c405329-c382-44a2-8c9d-74976164f122" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.589687 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7nhpc" podUID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.589724 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p8lq7" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" Jan 29 16:15:14 crc kubenswrapper[4895]: E0129 16:15:14.589759 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4slsz" podUID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" Jan 29 16:15:14 crc kubenswrapper[4895]: I0129 16:15:14.859571 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-h9mkw"] Jan 29 16:15:14 crc kubenswrapper[4895]: W0129 16:15:14.868752 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5113e2b8_dc97_42a1_aa1c_3d604cada8c2.slice/crio-81e6eff3cca6195748cd0cc7ce9c97d2c0cbacff30c0052ae63cbfcb20d1e361 WatchSource:0}: Error finding container 
81e6eff3cca6195748cd0cc7ce9c97d2c0cbacff30c0052ae63cbfcb20d1e361: Status 404 returned error can't find the container with id 81e6eff3cca6195748cd0cc7ce9c97d2c0cbacff30c0052ae63cbfcb20d1e361 Jan 29 16:15:14 crc kubenswrapper[4895]: I0129 16:15:14.957292 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:15:15 crc kubenswrapper[4895]: I0129 16:15:15.002043 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:15:15 crc kubenswrapper[4895]: I0129 16:15:15.010107 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l"] Jan 29 16:15:15 crc kubenswrapper[4895]: W0129 16:15:15.016244 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc306c8c0_a6f0_4811_9688_b811e9495c76.slice/crio-5c7da5c7e295bf639656753769cef67f4e592dafb660ce22d96c1745e021e412 WatchSource:0}: Error finding container 5c7da5c7e295bf639656753769cef67f4e592dafb660ce22d96c1745e021e412: Status 404 returned error can't find the container with id 5c7da5c7e295bf639656753769cef67f4e592dafb660ce22d96c1745e021e412 Jan 29 16:15:15 crc kubenswrapper[4895]: I0129 16:15:15.588730 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"18cb8033-bd36-4b53-8f71-7b2d8d527270","Type":"ContainerStarted","Data":"5c914e932b21987209df6f2cd235f85edec0c1daa8731a9e521c079947585921"} Jan 29 16:15:15 crc kubenswrapper[4895]: I0129 16:15:15.591551 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" event={"ID":"c306c8c0-a6f0-4811-9688-b811e9495c76","Type":"ContainerStarted","Data":"5c7da5c7e295bf639656753769cef67f4e592dafb660ce22d96c1745e021e412"} Jan 29 16:15:15 crc kubenswrapper[4895]: I0129 16:15:15.592957 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" event={"ID":"5113e2b8-dc97-42a1-aa1c-3d604cada8c2","Type":"ContainerStarted","Data":"81e6eff3cca6195748cd0cc7ce9c97d2c0cbacff30c0052ae63cbfcb20d1e361"} Jan 29 16:15:15 crc kubenswrapper[4895]: I0129 16:15:15.594082 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ce8828ec-75ea-44d6-8d7b-d20a654fb23b","Type":"ContainerStarted","Data":"ea45d7b5792e17ce3344f9bc24a5d7953a7b120a27a22fd05ca338e1e2992dc6"} Jan 29 16:15:15 crc kubenswrapper[4895]: I0129 16:15:15.595301 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:15:15 crc kubenswrapper[4895]: I0129 16:15:15.595369 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:15:16 crc kubenswrapper[4895]: I0129 16:15:16.602285 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" event={"ID":"5113e2b8-dc97-42a1-aa1c-3d604cada8c2","Type":"ContainerStarted","Data":"4906161e7332f9f80b731f68b58d1987d4d2ef3c7090dd9466004133ad05e7ab"} Jan 29 16:15:16 crc kubenswrapper[4895]: I0129 16:15:16.602680 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-h9mkw" event={"ID":"5113e2b8-dc97-42a1-aa1c-3d604cada8c2","Type":"ContainerStarted","Data":"fad791e5066ede0c25e99a2e0694a0d06c2321127c23f2eb8f28b6c4b166a688"} Jan 29 16:15:16 crc kubenswrapper[4895]: I0129 16:15:16.603714 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ce8828ec-75ea-44d6-8d7b-d20a654fb23b","Type":"ContainerStarted","Data":"50739e2f65f5d8048c1936eb6514ed7e0d19ec8f229c63fb80cec09d2229b560"} Jan 29 16:15:16 crc kubenswrapper[4895]: I0129 16:15:16.605800 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"18cb8033-bd36-4b53-8f71-7b2d8d527270","Type":"ContainerStarted","Data":"cfb9b7dfdf6d7eac717788f435405dc18bb16a400a7562c6af59ad05a13dbac9"} Jan 29 16:15:16 crc kubenswrapper[4895]: I0129 16:15:16.607505 4895 generic.go:334] "Generic (PLEG): container finished" podID="c306c8c0-a6f0-4811-9688-b811e9495c76" containerID="2f0c9de26cd5d9d5b3cc67bebf2e60370b9376d3d4307b54c46ff8e42fd415cd" exitCode=0 Jan 29 16:15:16 crc kubenswrapper[4895]: I0129 16:15:16.607544 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" event={"ID":"c306c8c0-a6f0-4811-9688-b811e9495c76","Type":"ContainerDied","Data":"2f0c9de26cd5d9d5b3cc67bebf2e60370b9376d3d4307b54c46ff8e42fd415cd"} Jan 29 16:15:16 crc kubenswrapper[4895]: I0129 16:15:16.648703 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=11.648679654 podStartE2EDuration="11.648679654s" podCreationTimestamp="2026-01-29 16:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:15:16.630030488 +0000 UTC m=+200.433007762" watchObservedRunningTime="2026-01-29 16:15:16.648679654 +0000 UTC m=+200.451656928" Jan 29 16:15:17 crc kubenswrapper[4895]: I0129 16:15:17.054313 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=7.054291868 podStartE2EDuration="7.054291868s" 
podCreationTimestamp="2026-01-29 16:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:15:16.663383332 +0000 UTC m=+200.466360596" watchObservedRunningTime="2026-01-29 16:15:17.054291868 +0000 UTC m=+200.857269122" Jan 29 16:15:17 crc kubenswrapper[4895]: I0129 16:15:17.319917 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:15:17 crc kubenswrapper[4895]: I0129 16:15:17.319988 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:15:17 crc kubenswrapper[4895]: I0129 16:15:17.320528 4895 patch_prober.go:28] interesting pod/downloads-7954f5f757-t675c container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:15:17 crc kubenswrapper[4895]: I0129 16:15:17.320553 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t675c" podUID="9dda0f03-a7e0-442d-b684-9b6b5a1885ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:15:17 crc kubenswrapper[4895]: I0129 16:15:17.614431 4895 generic.go:334] "Generic (PLEG): container finished" podID="ce8828ec-75ea-44d6-8d7b-d20a654fb23b" containerID="50739e2f65f5d8048c1936eb6514ed7e0d19ec8f229c63fb80cec09d2229b560" exitCode=0 Jan 29 16:15:17 crc 
kubenswrapper[4895]: I0129 16:15:17.614684 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ce8828ec-75ea-44d6-8d7b-d20a654fb23b","Type":"ContainerDied","Data":"50739e2f65f5d8048c1936eb6514ed7e0d19ec8f229c63fb80cec09d2229b560"} Jan 29 16:15:17 crc kubenswrapper[4895]: I0129 16:15:17.631856 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-h9mkw" podStartSLOduration=181.631833963 podStartE2EDuration="3m1.631833963s" podCreationTimestamp="2026-01-29 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:15:17.627829593 +0000 UTC m=+201.430806857" watchObservedRunningTime="2026-01-29 16:15:17.631833963 +0000 UTC m=+201.434811247" Jan 29 16:15:17 crc kubenswrapper[4895]: I0129 16:15:17.908730 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.100899 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p4jv\" (UniqueName: \"kubernetes.io/projected/c306c8c0-a6f0-4811-9688-b811e9495c76-kube-api-access-8p4jv\") pod \"c306c8c0-a6f0-4811-9688-b811e9495c76\" (UID: \"c306c8c0-a6f0-4811-9688-b811e9495c76\") " Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.101218 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c306c8c0-a6f0-4811-9688-b811e9495c76-secret-volume\") pod \"c306c8c0-a6f0-4811-9688-b811e9495c76\" (UID: \"c306c8c0-a6f0-4811-9688-b811e9495c76\") " Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.101293 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c306c8c0-a6f0-4811-9688-b811e9495c76-config-volume\") pod \"c306c8c0-a6f0-4811-9688-b811e9495c76\" (UID: \"c306c8c0-a6f0-4811-9688-b811e9495c76\") " Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.102092 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c306c8c0-a6f0-4811-9688-b811e9495c76-config-volume" (OuterVolumeSpecName: "config-volume") pod "c306c8c0-a6f0-4811-9688-b811e9495c76" (UID: "c306c8c0-a6f0-4811-9688-b811e9495c76"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.107120 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c306c8c0-a6f0-4811-9688-b811e9495c76-kube-api-access-8p4jv" (OuterVolumeSpecName: "kube-api-access-8p4jv") pod "c306c8c0-a6f0-4811-9688-b811e9495c76" (UID: "c306c8c0-a6f0-4811-9688-b811e9495c76"). InnerVolumeSpecName "kube-api-access-8p4jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.107585 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c306c8c0-a6f0-4811-9688-b811e9495c76-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c306c8c0-a6f0-4811-9688-b811e9495c76" (UID: "c306c8c0-a6f0-4811-9688-b811e9495c76"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.202570 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p4jv\" (UniqueName: \"kubernetes.io/projected/c306c8c0-a6f0-4811-9688-b811e9495c76-kube-api-access-8p4jv\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.202610 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c306c8c0-a6f0-4811-9688-b811e9495c76-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.202622 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c306c8c0-a6f0-4811-9688-b811e9495c76-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.623892 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.626283 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l" event={"ID":"c306c8c0-a6f0-4811-9688-b811e9495c76","Type":"ContainerDied","Data":"5c7da5c7e295bf639656753769cef67f4e592dafb660ce22d96c1745e021e412"} Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.626358 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c7da5c7e295bf639656753769cef67f4e592dafb660ce22d96c1745e021e412" Jan 29 16:15:18 crc kubenswrapper[4895]: I0129 16:15:18.922354 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:15:19 crc kubenswrapper[4895]: I0129 16:15:19.115275 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce8828ec-75ea-44d6-8d7b-d20a654fb23b-kubelet-dir\") pod \"ce8828ec-75ea-44d6-8d7b-d20a654fb23b\" (UID: \"ce8828ec-75ea-44d6-8d7b-d20a654fb23b\") " Jan 29 16:15:19 crc kubenswrapper[4895]: I0129 16:15:19.115385 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce8828ec-75ea-44d6-8d7b-d20a654fb23b-kube-api-access\") pod \"ce8828ec-75ea-44d6-8d7b-d20a654fb23b\" (UID: \"ce8828ec-75ea-44d6-8d7b-d20a654fb23b\") " Jan 29 16:15:19 crc kubenswrapper[4895]: I0129 16:15:19.115596 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce8828ec-75ea-44d6-8d7b-d20a654fb23b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ce8828ec-75ea-44d6-8d7b-d20a654fb23b" (UID: "ce8828ec-75ea-44d6-8d7b-d20a654fb23b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:19 crc kubenswrapper[4895]: I0129 16:15:19.132155 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8828ec-75ea-44d6-8d7b-d20a654fb23b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ce8828ec-75ea-44d6-8d7b-d20a654fb23b" (UID: "ce8828ec-75ea-44d6-8d7b-d20a654fb23b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:19 crc kubenswrapper[4895]: I0129 16:15:19.216610 4895 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce8828ec-75ea-44d6-8d7b-d20a654fb23b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:19 crc kubenswrapper[4895]: I0129 16:15:19.216662 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce8828ec-75ea-44d6-8d7b-d20a654fb23b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:19 crc kubenswrapper[4895]: I0129 16:15:19.630620 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ce8828ec-75ea-44d6-8d7b-d20a654fb23b","Type":"ContainerDied","Data":"ea45d7b5792e17ce3344f9bc24a5d7953a7b120a27a22fd05ca338e1e2992dc6"} Jan 29 16:15:19 crc kubenswrapper[4895]: I0129 16:15:19.630996 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea45d7b5792e17ce3344f9bc24a5d7953a7b120a27a22fd05ca338e1e2992dc6" Jan 29 16:15:19 crc kubenswrapper[4895]: I0129 16:15:19.630691 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:15:27 crc kubenswrapper[4895]: I0129 16:15:27.334639 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-t675c" Jan 29 16:15:27 crc kubenswrapper[4895]: I0129 16:15:27.823019 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:15:27 crc kubenswrapper[4895]: I0129 16:15:27.823455 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:15:27 crc kubenswrapper[4895]: I0129 16:15:27.823517 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:15:27 crc kubenswrapper[4895]: I0129 16:15:27.824456 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e20ae982c3c08edbe62f04934e293bad08e3d4632e97a190ee81a409341ad6b"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:15:27 crc kubenswrapper[4895]: I0129 16:15:27.824526 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" 
containerID="cri-o://2e20ae982c3c08edbe62f04934e293bad08e3d4632e97a190ee81a409341ad6b" gracePeriod=600 Jan 29 16:15:28 crc kubenswrapper[4895]: I0129 16:15:28.688174 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="2e20ae982c3c08edbe62f04934e293bad08e3d4632e97a190ee81a409341ad6b" exitCode=0 Jan 29 16:15:28 crc kubenswrapper[4895]: I0129 16:15:28.688241 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"2e20ae982c3c08edbe62f04934e293bad08e3d4632e97a190ee81a409341ad6b"} Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.431458 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" podUID="ae39172d-1e70-4e7c-8222-0cdaf8e645d6" containerName="oauth-openshift" containerID="cri-o://1552840d0d3bfc3dd9662620943b4dcfb084eb0fdcd40e97135d818a362a0278" gracePeriod=15 Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.737933 4895 generic.go:334] "Generic (PLEG): container finished" podID="ae39172d-1e70-4e7c-8222-0cdaf8e645d6" containerID="1552840d0d3bfc3dd9662620943b4dcfb084eb0fdcd40e97135d818a362a0278" exitCode=0 Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.738460 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" event={"ID":"ae39172d-1e70-4e7c-8222-0cdaf8e645d6","Type":"ContainerDied","Data":"1552840d0d3bfc3dd9662620943b4dcfb084eb0fdcd40e97135d818a362a0278"} Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.741689 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"56067da70a2eed3187e4fd8c7753eb01f974e7c0d15e185a67c2e03037c7bf90"} 
Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.815502 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.859964 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd"] Jan 29 16:15:35 crc kubenswrapper[4895]: E0129 16:15:35.860364 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae39172d-1e70-4e7c-8222-0cdaf8e645d6" containerName="oauth-openshift" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.860390 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae39172d-1e70-4e7c-8222-0cdaf8e645d6" containerName="oauth-openshift" Jan 29 16:15:35 crc kubenswrapper[4895]: E0129 16:15:35.860409 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c306c8c0-a6f0-4811-9688-b811e9495c76" containerName="collect-profiles" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.860419 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c306c8c0-a6f0-4811-9688-b811e9495c76" containerName="collect-profiles" Jan 29 16:15:35 crc kubenswrapper[4895]: E0129 16:15:35.860428 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8828ec-75ea-44d6-8d7b-d20a654fb23b" containerName="pruner" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.860438 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8828ec-75ea-44d6-8d7b-d20a654fb23b" containerName="pruner" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.860579 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce8828ec-75ea-44d6-8d7b-d20a654fb23b" containerName="pruner" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.860598 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c306c8c0-a6f0-4811-9688-b811e9495c76" containerName="collect-profiles" Jan 29 16:15:35 crc 
kubenswrapper[4895]: I0129 16:15:35.860612 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae39172d-1e70-4e7c-8222-0cdaf8e645d6" containerName="oauth-openshift" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.861253 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.878725 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd"] Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930394 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-serving-cert\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930464 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-trusted-ca-bundle\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930512 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-audit-policies\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930545 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-service-ca\") pod 
\"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930569 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-audit-dir\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930613 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-router-certs\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930652 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-session\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930680 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct4tp\" (UniqueName: \"kubernetes.io/projected/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-kube-api-access-ct4tp\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930710 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-login\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 
16:15:35.930785 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-idp-0-file-data\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930835 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-provider-selection\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930883 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-cliconfig\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930915 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-ocp-branding-template\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.930999 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-error\") pod \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\" (UID: \"ae39172d-1e70-4e7c-8222-0cdaf8e645d6\") " Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.931519 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.931580 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg7mj\" (UniqueName: \"kubernetes.io/projected/48d1849a-21af-4e5a-aa40-63755cb2096c-kube-api-access-xg7mj\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.931610 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-session\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.931630 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.931660 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.931680 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.931732 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48d1849a-21af-4e5a-aa40-63755cb2096c-audit-policies\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.931765 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.931789 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48d1849a-21af-4e5a-aa40-63755cb2096c-audit-dir\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " 
pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.931818 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.931852 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-user-template-login\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.932017 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-user-template-error\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.932034 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.932062 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.937772 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.938397 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.938648 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.942148 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.954554 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.965527 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.972610 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.982137 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.982290 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-kube-api-access-ct4tp" (OuterVolumeSpecName: "kube-api-access-ct4tp") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "kube-api-access-ct4tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:35 crc kubenswrapper[4895]: I0129 16:15:35.988269 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.003818 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.012307 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.025045 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.031636 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ae39172d-1e70-4e7c-8222-0cdaf8e645d6" (UID: "ae39172d-1e70-4e7c-8222-0cdaf8e645d6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.032812 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.032876 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.032913 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48d1849a-21af-4e5a-aa40-63755cb2096c-audit-policies\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.032935 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.032957 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/48d1849a-21af-4e5a-aa40-63755cb2096c-audit-dir\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.032977 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033000 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-user-template-login\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033020 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-user-template-error\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033036 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " 
pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033058 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033089 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033112 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg7mj\" (UniqueName: \"kubernetes.io/projected/48d1849a-21af-4e5a-aa40-63755cb2096c-kube-api-access-xg7mj\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033134 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-session\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033152 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033197 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033210 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033220 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033205 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48d1849a-21af-4e5a-aa40-63755cb2096c-audit-dir\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033231 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033335 4895 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033355 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033373 4895 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033389 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033405 4895 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033420 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033438 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct4tp\" (UniqueName: \"kubernetes.io/projected/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-kube-api-access-ct4tp\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033451 4895 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033466 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.033482 4895 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae39172d-1e70-4e7c-8222-0cdaf8e645d6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.034317 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.035396 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48d1849a-21af-4e5a-aa40-63755cb2096c-audit-policies\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.036515 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: 
\"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.040223 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.040547 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.044138 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-session\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.049313 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.050663 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.051830 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-user-template-login\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.054922 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg7mj\" (UniqueName: \"kubernetes.io/projected/48d1849a-21af-4e5a-aa40-63755cb2096c-kube-api-access-xg7mj\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.055288 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.055535 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-user-template-error\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " 
pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.056563 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48d1849a-21af-4e5a-aa40-63755cb2096c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c7c67b6b-5gtxd\" (UID: \"48d1849a-21af-4e5a-aa40-63755cb2096c\") " pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.191152 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.704267 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd"] Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.772650 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4slsz" event={"ID":"f5ead4e3-bcde-4c47-8173-9f7773f0a45f","Type":"ContainerStarted","Data":"5351b2551be4c700fb675eeb8334423b1cacf0b5b99bfa91ee42e82301a12a9a"} Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.784409 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9nrd" event={"ID":"06277e29-59af-446a-81a3-b3b8b1b5ab0a","Type":"ContainerStarted","Data":"9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7"} Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.815840 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tj74" event={"ID":"044df5fd-0d96-4aab-b09e-24870d0e4bd9","Type":"ContainerStarted","Data":"daab7a0e703486405c7a24d8ee339adb29192010f8551230c398ca171d75eeb2"} Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.839678 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" event={"ID":"48d1849a-21af-4e5a-aa40-63755cb2096c","Type":"ContainerStarted","Data":"162c15645e3dd0cc1ce2b829b26faf6338f65f119cd11b7763800e8aab3d672c"} Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.860415 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8lq7" event={"ID":"408c9cd8-1d91-4a4b-9e57-748578b4704e","Type":"ContainerStarted","Data":"3a72943e84826204725d708686f34804937182517d11682dc95d8625a81150d4"} Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.874996 4895 generic.go:334] "Generic (PLEG): container finished" podID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" containerID="b1b1ea807caa2f2048162d9490c5427790d2e8ccba364f7b2ba58b0d5caf4da9" exitCode=0 Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.875156 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nhpc" event={"ID":"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81","Type":"ContainerDied","Data":"b1b1ea807caa2f2048162d9490c5427790d2e8ccba364f7b2ba58b0d5caf4da9"} Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.892544 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" event={"ID":"ae39172d-1e70-4e7c-8222-0cdaf8e645d6","Type":"ContainerDied","Data":"1625011d40cbcb846aba1086006d4ff223f781d916e4a16becb24b8d3e037b24"} Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.892576 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pmkl9" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.892634 4895 scope.go:117] "RemoveContainer" containerID="1552840d0d3bfc3dd9662620943b4dcfb084eb0fdcd40e97135d818a362a0278" Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.898679 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8mm9" event={"ID":"289a9c3e-eaaf-434d-87d3-24097a01057e","Type":"ContainerStarted","Data":"852aa7c1e0558dbca9e0c8584b01df40af413373b2047805ff10e69ba15246e9"} Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.919591 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m467m" event={"ID":"8c405329-c382-44a2-8c9d-74976164f122","Type":"ContainerStarted","Data":"c57e5ac2c197dc9ee67208ae2ccd6451e21dc44c1f4787373a40dc3e544958a9"} Jan 29 16:15:36 crc kubenswrapper[4895]: I0129 16:15:36.954094 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdds6" event={"ID":"adf371b3-ba58-4be0-a05c-b88d01ffc60d","Type":"ContainerStarted","Data":"86af98a4f397d398c5d87c0a4126d134af2b0c710a7d458592d5f290d1cf7d8c"} Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.180007 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pmkl9"] Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.187100 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pmkl9"] Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.962823 4895 generic.go:334] "Generic (PLEG): container finished" podID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" containerID="5351b2551be4c700fb675eeb8334423b1cacf0b5b99bfa91ee42e82301a12a9a" exitCode=0 Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.962985 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-4slsz" event={"ID":"f5ead4e3-bcde-4c47-8173-9f7773f0a45f","Type":"ContainerDied","Data":"5351b2551be4c700fb675eeb8334423b1cacf0b5b99bfa91ee42e82301a12a9a"} Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.968240 4895 generic.go:334] "Generic (PLEG): container finished" podID="408c9cd8-1d91-4a4b-9e57-748578b4704e" containerID="3a72943e84826204725d708686f34804937182517d11682dc95d8625a81150d4" exitCode=0 Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.968309 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8lq7" event={"ID":"408c9cd8-1d91-4a4b-9e57-748578b4704e","Type":"ContainerDied","Data":"3a72943e84826204725d708686f34804937182517d11682dc95d8625a81150d4"} Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.977086 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nhpc" event={"ID":"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81","Type":"ContainerStarted","Data":"9f8b74e2b5af7ee1e9f637b60eba673cdd330c0293eb25dd5779ace1d176c0c9"} Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.980491 4895 generic.go:334] "Generic (PLEG): container finished" podID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" containerID="daab7a0e703486405c7a24d8ee339adb29192010f8551230c398ca171d75eeb2" exitCode=0 Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.980572 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tj74" event={"ID":"044df5fd-0d96-4aab-b09e-24870d0e4bd9","Type":"ContainerDied","Data":"daab7a0e703486405c7a24d8ee339adb29192010f8551230c398ca171d75eeb2"} Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.984589 4895 generic.go:334] "Generic (PLEG): container finished" podID="289a9c3e-eaaf-434d-87d3-24097a01057e" containerID="852aa7c1e0558dbca9e0c8584b01df40af413373b2047805ff10e69ba15246e9" exitCode=0 Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.984679 
4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8mm9" event={"ID":"289a9c3e-eaaf-434d-87d3-24097a01057e","Type":"ContainerDied","Data":"852aa7c1e0558dbca9e0c8584b01df40af413373b2047805ff10e69ba15246e9"} Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.987157 4895 generic.go:334] "Generic (PLEG): container finished" podID="8c405329-c382-44a2-8c9d-74976164f122" containerID="c57e5ac2c197dc9ee67208ae2ccd6451e21dc44c1f4787373a40dc3e544958a9" exitCode=0 Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.987212 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m467m" event={"ID":"8c405329-c382-44a2-8c9d-74976164f122","Type":"ContainerDied","Data":"c57e5ac2c197dc9ee67208ae2ccd6451e21dc44c1f4787373a40dc3e544958a9"} Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.990505 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" event={"ID":"48d1849a-21af-4e5a-aa40-63755cb2096c","Type":"ContainerStarted","Data":"2faa3045fa17ab3c38c480278d529265d3766bfc8e66674dbe4d0fa145ea5180"} Jan 29 16:15:37 crc kubenswrapper[4895]: I0129 16:15:37.991200 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:38 crc kubenswrapper[4895]: I0129 16:15:37.999713 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" Jan 29 16:15:38 crc kubenswrapper[4895]: I0129 16:15:38.025072 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-55c7c67b6b-5gtxd" podStartSLOduration=28.025041244 podStartE2EDuration="28.025041244s" podCreationTimestamp="2026-01-29 16:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 16:15:38.019967217 +0000 UTC m=+221.822944481" watchObservedRunningTime="2026-01-29 16:15:38.025041244 +0000 UTC m=+221.828018518" Jan 29 16:15:38 crc kubenswrapper[4895]: I0129 16:15:38.049831 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7nhpc" podStartSLOduration=3.317761433 podStartE2EDuration="1m12.049800165s" podCreationTimestamp="2026-01-29 16:14:26 +0000 UTC" firstStartedPulling="2026-01-29 16:14:28.767154799 +0000 UTC m=+152.570132063" lastFinishedPulling="2026-01-29 16:15:37.499193531 +0000 UTC m=+221.302170795" observedRunningTime="2026-01-29 16:15:38.03892537 +0000 UTC m=+221.841902674" watchObservedRunningTime="2026-01-29 16:15:38.049800165 +0000 UTC m=+221.852777449" Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 16:15:38.999366 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tj74" event={"ID":"044df5fd-0d96-4aab-b09e-24870d0e4bd9","Type":"ContainerStarted","Data":"80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281"} Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 16:15:39.002449 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8mm9" event={"ID":"289a9c3e-eaaf-434d-87d3-24097a01057e","Type":"ContainerStarted","Data":"e1a984f1d0448b6f5bf7ee0e44ec1e0fedcd9704d616ea01038937adb8b3c515"} Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 16:15:39.005976 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m467m" event={"ID":"8c405329-c382-44a2-8c9d-74976164f122","Type":"ContainerStarted","Data":"5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f"} Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 16:15:39.008347 4895 generic.go:334] "Generic (PLEG): container finished" podID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" 
containerID="86af98a4f397d398c5d87c0a4126d134af2b0c710a7d458592d5f290d1cf7d8c" exitCode=0 Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 16:15:39.008493 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdds6" event={"ID":"adf371b3-ba58-4be0-a05c-b88d01ffc60d","Type":"ContainerDied","Data":"86af98a4f397d398c5d87c0a4126d134af2b0c710a7d458592d5f290d1cf7d8c"} Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 16:15:39.016903 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4slsz" event={"ID":"f5ead4e3-bcde-4c47-8173-9f7773f0a45f","Type":"ContainerStarted","Data":"4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b"} Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 16:15:39.034443 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8lq7" event={"ID":"408c9cd8-1d91-4a4b-9e57-748578b4704e","Type":"ContainerStarted","Data":"c2308180fd827a5c32b1af5b88cb3add6e89a005240808fab6d24647009f1071"} Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 16:15:39.048488 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4tj74" podStartSLOduration=2.055541963 podStartE2EDuration="1m13.048461575s" podCreationTimestamp="2026-01-29 16:14:26 +0000 UTC" firstStartedPulling="2026-01-29 16:14:27.698219827 +0000 UTC m=+151.501197091" lastFinishedPulling="2026-01-29 16:15:38.691139439 +0000 UTC m=+222.494116703" observedRunningTime="2026-01-29 16:15:39.026014466 +0000 UTC m=+222.828991730" watchObservedRunningTime="2026-01-29 16:15:39.048461575 +0000 UTC m=+222.851438839" Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 16:15:39.070427 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae39172d-1e70-4e7c-8222-0cdaf8e645d6" path="/var/lib/kubelet/pods/ae39172d-1e70-4e7c-8222-0cdaf8e645d6/volumes" Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 
16:15:39.071724 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m467m" podStartSLOduration=3.463248985 podStartE2EDuration="1m12.071697534s" podCreationTimestamp="2026-01-29 16:14:27 +0000 UTC" firstStartedPulling="2026-01-29 16:14:29.862349928 +0000 UTC m=+153.665327192" lastFinishedPulling="2026-01-29 16:15:38.470798477 +0000 UTC m=+222.273775741" observedRunningTime="2026-01-29 16:15:39.067673956 +0000 UTC m=+222.870651230" watchObservedRunningTime="2026-01-29 16:15:39.071697534 +0000 UTC m=+222.874674798" Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 16:15:39.095663 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t8mm9" podStartSLOduration=2.258861133 podStartE2EDuration="1m13.095638834s" podCreationTimestamp="2026-01-29 16:14:26 +0000 UTC" firstStartedPulling="2026-01-29 16:14:27.687115906 +0000 UTC m=+151.490093180" lastFinishedPulling="2026-01-29 16:15:38.523893617 +0000 UTC m=+222.326870881" observedRunningTime="2026-01-29 16:15:39.091428559 +0000 UTC m=+222.894405843" watchObservedRunningTime="2026-01-29 16:15:39.095638834 +0000 UTC m=+222.898616088" Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 16:15:39.149953 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4slsz" podStartSLOduration=3.265363391 podStartE2EDuration="1m14.149931685s" podCreationTimestamp="2026-01-29 16:14:25 +0000 UTC" firstStartedPulling="2026-01-29 16:14:27.683988002 +0000 UTC m=+151.486965266" lastFinishedPulling="2026-01-29 16:15:38.568556296 +0000 UTC m=+222.371533560" observedRunningTime="2026-01-29 16:15:39.147787236 +0000 UTC m=+222.950764500" watchObservedRunningTime="2026-01-29 16:15:39.149931685 +0000 UTC m=+222.952908949" Jan 29 16:15:39 crc kubenswrapper[4895]: E0129 16:15:39.160876 4895 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06277e29_59af_446a_81a3_b3b8b1b5ab0a.slice/crio-9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:15:39 crc kubenswrapper[4895]: I0129 16:15:39.170613 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p8lq7" podStartSLOduration=2.597852903 podStartE2EDuration="1m11.170584144s" podCreationTimestamp="2026-01-29 16:14:28 +0000 UTC" firstStartedPulling="2026-01-29 16:14:29.826121356 +0000 UTC m=+153.629098620" lastFinishedPulling="2026-01-29 16:15:38.398852597 +0000 UTC m=+222.201829861" observedRunningTime="2026-01-29 16:15:39.169856675 +0000 UTC m=+222.972833959" watchObservedRunningTime="2026-01-29 16:15:39.170584144 +0000 UTC m=+222.973561408" Jan 29 16:15:40 crc kubenswrapper[4895]: I0129 16:15:40.041719 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdds6" event={"ID":"adf371b3-ba58-4be0-a05c-b88d01ffc60d","Type":"ContainerStarted","Data":"d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d"} Jan 29 16:15:40 crc kubenswrapper[4895]: I0129 16:15:40.044001 4895 generic.go:334] "Generic (PLEG): container finished" podID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" containerID="9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7" exitCode=0 Jan 29 16:15:40 crc kubenswrapper[4895]: I0129 16:15:40.044204 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9nrd" event={"ID":"06277e29-59af-446a-81a3-b3b8b1b5ab0a","Type":"ContainerDied","Data":"9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7"} Jan 29 16:15:40 crc kubenswrapper[4895]: I0129 16:15:40.067179 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-zdds6" podStartSLOduration=2.442427219 podStartE2EDuration="1m11.067159316s" podCreationTimestamp="2026-01-29 16:14:29 +0000 UTC" firstStartedPulling="2026-01-29 16:14:30.929972159 +0000 UTC m=+154.732949413" lastFinishedPulling="2026-01-29 16:15:39.554704246 +0000 UTC m=+223.357681510" observedRunningTime="2026-01-29 16:15:40.064777353 +0000 UTC m=+223.867754617" watchObservedRunningTime="2026-01-29 16:15:40.067159316 +0000 UTC m=+223.870136580" Jan 29 16:15:41 crc kubenswrapper[4895]: I0129 16:15:41.054629 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9nrd" event={"ID":"06277e29-59af-446a-81a3-b3b8b1b5ab0a","Type":"ContainerStarted","Data":"a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da"} Jan 29 16:15:41 crc kubenswrapper[4895]: I0129 16:15:41.085324 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z9nrd" podStartSLOduration=2.281081201 podStartE2EDuration="1m12.085294404s" podCreationTimestamp="2026-01-29 16:14:29 +0000 UTC" firstStartedPulling="2026-01-29 16:14:30.925896303 +0000 UTC m=+154.728873567" lastFinishedPulling="2026-01-29 16:15:40.730109506 +0000 UTC m=+224.533086770" observedRunningTime="2026-01-29 16:15:41.082800386 +0000 UTC m=+224.885777650" watchObservedRunningTime="2026-01-29 16:15:41.085294404 +0000 UTC m=+224.888271688" Jan 29 16:15:45 crc kubenswrapper[4895]: I0129 16:15:45.965072 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:15:45 crc kubenswrapper[4895]: I0129 16:15:45.965173 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:15:46 crc kubenswrapper[4895]: I0129 16:15:46.182803 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:15:46 crc kubenswrapper[4895]: I0129 16:15:46.254700 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:15:46 crc kubenswrapper[4895]: I0129 16:15:46.471459 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:15:46 crc kubenswrapper[4895]: I0129 16:15:46.471581 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:15:46 crc kubenswrapper[4895]: I0129 16:15:46.541604 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:15:46 crc kubenswrapper[4895]: I0129 16:15:46.664580 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:15:46 crc kubenswrapper[4895]: I0129 16:15:46.664658 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:15:46 crc kubenswrapper[4895]: I0129 16:15:46.723111 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:15:46 crc kubenswrapper[4895]: I0129 16:15:46.839424 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:15:46 crc kubenswrapper[4895]: I0129 16:15:46.839819 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:15:46 crc kubenswrapper[4895]: I0129 16:15:46.893553 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:15:47 crc kubenswrapper[4895]: 
I0129 16:15:47.139951 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:15:47 crc kubenswrapper[4895]: I0129 16:15:47.142185 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:15:47 crc kubenswrapper[4895]: I0129 16:15:47.148546 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:15:47 crc kubenswrapper[4895]: I0129 16:15:47.681006 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t8mm9"] Jan 29 16:15:48 crc kubenswrapper[4895]: I0129 16:15:48.243324 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:15:48 crc kubenswrapper[4895]: I0129 16:15:48.244526 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:15:48 crc kubenswrapper[4895]: I0129 16:15:48.287214 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:15:48 crc kubenswrapper[4895]: I0129 16:15:48.569667 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:15:48 crc kubenswrapper[4895]: I0129 16:15:48.569742 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:15:48 crc kubenswrapper[4895]: I0129 16:15:48.630187 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:15:49 crc kubenswrapper[4895]: I0129 16:15:49.076568 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-7nhpc"] Jan 29 16:15:49 crc kubenswrapper[4895]: I0129 16:15:49.108853 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7nhpc" podUID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" containerName="registry-server" containerID="cri-o://9f8b74e2b5af7ee1e9f637b60eba673cdd330c0293eb25dd5779ace1d176c0c9" gracePeriod=2 Jan 29 16:15:49 crc kubenswrapper[4895]: I0129 16:15:49.110307 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t8mm9" podUID="289a9c3e-eaaf-434d-87d3-24097a01057e" containerName="registry-server" containerID="cri-o://e1a984f1d0448b6f5bf7ee0e44ec1e0fedcd9704d616ea01038937adb8b3c515" gracePeriod=2 Jan 29 16:15:49 crc kubenswrapper[4895]: I0129 16:15:49.155007 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:15:49 crc kubenswrapper[4895]: I0129 16:15:49.160025 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:15:49 crc kubenswrapper[4895]: I0129 16:15:49.421363 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zdds6" Jan 29 16:15:49 crc kubenswrapper[4895]: I0129 16:15:49.421473 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zdds6" Jan 29 16:15:49 crc kubenswrapper[4895]: I0129 16:15:49.474909 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zdds6" Jan 29 16:15:49 crc kubenswrapper[4895]: I0129 16:15:49.792016 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z9nrd" Jan 29 16:15:49 crc kubenswrapper[4895]: I0129 16:15:49.792081 4895 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z9nrd" Jan 29 16:15:49 crc kubenswrapper[4895]: I0129 16:15:49.841545 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z9nrd" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.118719 4895 generic.go:334] "Generic (PLEG): container finished" podID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" containerID="9f8b74e2b5af7ee1e9f637b60eba673cdd330c0293eb25dd5779ace1d176c0c9" exitCode=0 Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.118792 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nhpc" event={"ID":"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81","Type":"ContainerDied","Data":"9f8b74e2b5af7ee1e9f637b60eba673cdd330c0293eb25dd5779ace1d176c0c9"} Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.122163 4895 generic.go:334] "Generic (PLEG): container finished" podID="289a9c3e-eaaf-434d-87d3-24097a01057e" containerID="e1a984f1d0448b6f5bf7ee0e44ec1e0fedcd9704d616ea01038937adb8b3c515" exitCode=0 Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.122406 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8mm9" event={"ID":"289a9c3e-eaaf-434d-87d3-24097a01057e","Type":"ContainerDied","Data":"e1a984f1d0448b6f5bf7ee0e44ec1e0fedcd9704d616ea01038937adb8b3c515"} Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.176130 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zdds6" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.197041 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z9nrd" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.512892 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.520256 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.592999 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7v95\" (UniqueName: \"kubernetes.io/projected/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-kube-api-access-m7v95\") pod \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\" (UID: \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\") " Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.593071 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289a9c3e-eaaf-434d-87d3-24097a01057e-utilities\") pod \"289a9c3e-eaaf-434d-87d3-24097a01057e\" (UID: \"289a9c3e-eaaf-434d-87d3-24097a01057e\") " Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.593101 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-utilities\") pod \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\" (UID: \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\") " Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.593137 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnvlx\" (UniqueName: \"kubernetes.io/projected/289a9c3e-eaaf-434d-87d3-24097a01057e-kube-api-access-cnvlx\") pod \"289a9c3e-eaaf-434d-87d3-24097a01057e\" (UID: \"289a9c3e-eaaf-434d-87d3-24097a01057e\") " Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.593246 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289a9c3e-eaaf-434d-87d3-24097a01057e-catalog-content\") pod 
\"289a9c3e-eaaf-434d-87d3-24097a01057e\" (UID: \"289a9c3e-eaaf-434d-87d3-24097a01057e\") " Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.593276 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-catalog-content\") pod \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\" (UID: \"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81\") " Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.594253 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/289a9c3e-eaaf-434d-87d3-24097a01057e-utilities" (OuterVolumeSpecName: "utilities") pod "289a9c3e-eaaf-434d-87d3-24097a01057e" (UID: "289a9c3e-eaaf-434d-87d3-24097a01057e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.594321 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-utilities" (OuterVolumeSpecName: "utilities") pod "45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" (UID: "45b5c07a-bde1-4ffb-bf31-9c7c296d7f81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.600819 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289a9c3e-eaaf-434d-87d3-24097a01057e-kube-api-access-cnvlx" (OuterVolumeSpecName: "kube-api-access-cnvlx") pod "289a9c3e-eaaf-434d-87d3-24097a01057e" (UID: "289a9c3e-eaaf-434d-87d3-24097a01057e"). InnerVolumeSpecName "kube-api-access-cnvlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.601355 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-kube-api-access-m7v95" (OuterVolumeSpecName: "kube-api-access-m7v95") pod "45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" (UID: "45b5c07a-bde1-4ffb-bf31-9c7c296d7f81"). InnerVolumeSpecName "kube-api-access-m7v95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.641015 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" (UID: "45b5c07a-bde1-4ffb-bf31-9c7c296d7f81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.653565 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/289a9c3e-eaaf-434d-87d3-24097a01057e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "289a9c3e-eaaf-434d-87d3-24097a01057e" (UID: "289a9c3e-eaaf-434d-87d3-24097a01057e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.694960 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289a9c3e-eaaf-434d-87d3-24097a01057e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.695005 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.695019 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7v95\" (UniqueName: \"kubernetes.io/projected/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-kube-api-access-m7v95\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.695034 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289a9c3e-eaaf-434d-87d3-24097a01057e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.695044 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:50 crc kubenswrapper[4895]: I0129 16:15:50.695058 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnvlx\" (UniqueName: \"kubernetes.io/projected/289a9c3e-eaaf-434d-87d3-24097a01057e-kube-api-access-cnvlx\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.131478 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7nhpc" Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.131646 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nhpc" event={"ID":"45b5c07a-bde1-4ffb-bf31-9c7c296d7f81","Type":"ContainerDied","Data":"73a3878983dd7ee858f222fef7c58fbd58d74787af531d2642a620c110900bb3"} Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.132069 4895 scope.go:117] "RemoveContainer" containerID="9f8b74e2b5af7ee1e9f637b60eba673cdd330c0293eb25dd5779ace1d176c0c9" Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.135725 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t8mm9" event={"ID":"289a9c3e-eaaf-434d-87d3-24097a01057e","Type":"ContainerDied","Data":"a60efd50206675aba52fa33471a6041f265ffb906f9ffde4fbaafb8292cfe857"} Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.135857 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t8mm9" Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.157827 4895 scope.go:117] "RemoveContainer" containerID="b1b1ea807caa2f2048162d9490c5427790d2e8ccba364f7b2ba58b0d5caf4da9" Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.160426 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7nhpc"] Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.167085 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7nhpc"] Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.173223 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t8mm9"] Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.175675 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t8mm9"] Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.177883 4895 scope.go:117] "RemoveContainer" containerID="baff8f969c1e1def9f3cf43a969b8505763796e5fdf5cee4bebf8453048e0a79" Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.195890 4895 scope.go:117] "RemoveContainer" containerID="e1a984f1d0448b6f5bf7ee0e44ec1e0fedcd9704d616ea01038937adb8b3c515" Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.212752 4895 scope.go:117] "RemoveContainer" containerID="852aa7c1e0558dbca9e0c8584b01df40af413373b2047805ff10e69ba15246e9" Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.231644 4895 scope.go:117] "RemoveContainer" containerID="413d232afee33b9bb3f2803f5bf0fd749535c80cbdd7bad686fce1adf799ce83" Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.474822 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p8lq7"] Jan 29 16:15:51 crc kubenswrapper[4895]: I0129 16:15:51.475649 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-p8lq7" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" containerName="registry-server" containerID="cri-o://c2308180fd827a5c32b1af5b88cb3add6e89a005240808fab6d24647009f1071" gracePeriod=2 Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.045843 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="289a9c3e-eaaf-434d-87d3-24097a01057e" path="/var/lib/kubelet/pods/289a9c3e-eaaf-434d-87d3-24097a01057e/volumes" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.046842 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" path="/var/lib/kubelet/pods/45b5c07a-bde1-4ffb-bf31-9c7c296d7f81/volumes" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.156643 4895 generic.go:334] "Generic (PLEG): container finished" podID="408c9cd8-1d91-4a4b-9e57-748578b4704e" containerID="c2308180fd827a5c32b1af5b88cb3add6e89a005240808fab6d24647009f1071" exitCode=0 Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.156729 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8lq7" event={"ID":"408c9cd8-1d91-4a4b-9e57-748578b4704e","Type":"ContainerDied","Data":"c2308180fd827a5c32b1af5b88cb3add6e89a005240808fab6d24647009f1071"} Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.518244 4895 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 16:15:53.519278 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289a9c3e-eaaf-434d-87d3-24097a01057e" containerName="extract-content" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.519313 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="289a9c3e-eaaf-434d-87d3-24097a01057e" containerName="extract-content" Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 16:15:53.519332 4895 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" containerName="extract-utilities" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.519348 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" containerName="extract-utilities" Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 16:15:53.519369 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" containerName="extract-content" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.519380 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" containerName="extract-content" Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 16:15:53.519405 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289a9c3e-eaaf-434d-87d3-24097a01057e" containerName="registry-server" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.519415 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="289a9c3e-eaaf-434d-87d3-24097a01057e" containerName="registry-server" Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 16:15:53.519432 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289a9c3e-eaaf-434d-87d3-24097a01057e" containerName="extract-utilities" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.519443 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="289a9c3e-eaaf-434d-87d3-24097a01057e" containerName="extract-utilities" Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 16:15:53.519466 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" containerName="registry-server" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.519476 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" containerName="registry-server" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.519662 4895 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="45b5c07a-bde1-4ffb-bf31-9c7c296d7f81" containerName="registry-server" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.519683 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="289a9c3e-eaaf-434d-87d3-24097a01057e" containerName="registry-server" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.520324 4895 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.520505 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.520637 4895 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.520773 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca" gracePeriod=15 Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.520812 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e" gracePeriod=15 Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.520907 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0" gracePeriod=15 Jan 29 16:15:53 crc 
kubenswrapper[4895]: I0129 16:15:53.521016 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b" gracePeriod=15 Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.521005 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105" gracePeriod=15 Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 16:15:53.521686 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.521743 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 16:15:53.521768 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.521780 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 16:15:53.521800 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.521811 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 
16:15:53.521825 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.521837 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 16:15:53.521860 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.521894 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 16:15:53.521916 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.521959 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:15:53 crc kubenswrapper[4895]: E0129 16:15:53.521979 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.521991 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.522197 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.522221 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.522238 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.522251 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.522268 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.522592 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.525495 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.533540 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.533616 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.533670 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.533761 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.533940 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.635610 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.635702 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.635729 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.635752 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.635967 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.636022 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.636043 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.636059 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.636109 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.635919 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.636152 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.636175 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.636197 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.737707 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.737767 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.737801 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.737911 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:15:53 crc 
kubenswrapper[4895]: I0129 16:15:53.737958 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.738003 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.800500 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.801637 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.839273 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408c9cd8-1d91-4a4b-9e57-748578b4704e-utilities\") pod \"408c9cd8-1d91-4a4b-9e57-748578b4704e\" (UID: \"408c9cd8-1d91-4a4b-9e57-748578b4704e\") " Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.839785 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408c9cd8-1d91-4a4b-9e57-748578b4704e-catalog-content\") pod \"408c9cd8-1d91-4a4b-9e57-748578b4704e\" (UID: \"408c9cd8-1d91-4a4b-9e57-748578b4704e\") " Jan 29 
16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.840170 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/408c9cd8-1d91-4a4b-9e57-748578b4704e-utilities" (OuterVolumeSpecName: "utilities") pod "408c9cd8-1d91-4a4b-9e57-748578b4704e" (UID: "408c9cd8-1d91-4a4b-9e57-748578b4704e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.845072 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkxs8\" (UniqueName: \"kubernetes.io/projected/408c9cd8-1d91-4a4b-9e57-748578b4704e-kube-api-access-kkxs8\") pod \"408c9cd8-1d91-4a4b-9e57-748578b4704e\" (UID: \"408c9cd8-1d91-4a4b-9e57-748578b4704e\") " Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.845443 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408c9cd8-1d91-4a4b-9e57-748578b4704e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.852107 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/408c9cd8-1d91-4a4b-9e57-748578b4704e-kube-api-access-kkxs8" (OuterVolumeSpecName: "kube-api-access-kkxs8") pod "408c9cd8-1d91-4a4b-9e57-748578b4704e" (UID: "408c9cd8-1d91-4a4b-9e57-748578b4704e"). InnerVolumeSpecName "kube-api-access-kkxs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.865224 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/408c9cd8-1d91-4a4b-9e57-748578b4704e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "408c9cd8-1d91-4a4b-9e57-748578b4704e" (UID: "408c9cd8-1d91-4a4b-9e57-748578b4704e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.949113 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408c9cd8-1d91-4a4b-9e57-748578b4704e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:53 crc kubenswrapper[4895]: I0129 16:15:53.949165 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkxs8\" (UniqueName: \"kubernetes.io/projected/408c9cd8-1d91-4a4b-9e57-748578b4704e-kube-api-access-kkxs8\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.168047 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8lq7" event={"ID":"408c9cd8-1d91-4a4b-9e57-748578b4704e","Type":"ContainerDied","Data":"67f07a8ad1b5922bf338a3f87b1a07e12c6823e0d95117bfd033eaf580bddfa2"} Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.168100 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p8lq7" Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.168145 4895 scope.go:117] "RemoveContainer" containerID="c2308180fd827a5c32b1af5b88cb3add6e89a005240808fab6d24647009f1071" Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.170301 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.171749 4895 generic.go:334] "Generic (PLEG): container finished" podID="18cb8033-bd36-4b53-8f71-7b2d8d527270" containerID="cfb9b7dfdf6d7eac717788f435405dc18bb16a400a7562c6af59ad05a13dbac9" exitCode=0 Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.171810 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"18cb8033-bd36-4b53-8f71-7b2d8d527270","Type":"ContainerDied","Data":"cfb9b7dfdf6d7eac717788f435405dc18bb16a400a7562c6af59ad05a13dbac9"} Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.173139 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.173630 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection 
refused" Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.174324 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.175644 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.177148 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0" exitCode=0 Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.177176 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105" exitCode=0 Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.177187 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e" exitCode=0 Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.177198 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b" exitCode=2 Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.185146 4895 scope.go:117] "RemoveContainer" containerID="3a72943e84826204725d708686f34804937182517d11682dc95d8625a81150d4" Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.188379 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.188933 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.203668 4895 scope.go:117] "RemoveContainer" containerID="ee2c9cccd2626ba5b9215dd1da38d60ffc398eb5e6971b7ff86866a7d49fac65" Jan 29 16:15:54 crc kubenswrapper[4895]: I0129 16:15:54.235456 4895 scope.go:117] "RemoveContainer" containerID="608b1c44f8e62e7328b62a04f5c81de60ae5fbfb3f6d3393be5d46d7df3c71bb" Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.189192 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.536822 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.537769 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.538128 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.573055 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18cb8033-bd36-4b53-8f71-7b2d8d527270-kubelet-dir\") pod \"18cb8033-bd36-4b53-8f71-7b2d8d527270\" (UID: \"18cb8033-bd36-4b53-8f71-7b2d8d527270\") " Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.573114 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/18cb8033-bd36-4b53-8f71-7b2d8d527270-var-lock\") pod \"18cb8033-bd36-4b53-8f71-7b2d8d527270\" (UID: \"18cb8033-bd36-4b53-8f71-7b2d8d527270\") " Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.573227 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18cb8033-bd36-4b53-8f71-7b2d8d527270-kube-api-access\") pod \"18cb8033-bd36-4b53-8f71-7b2d8d527270\" (UID: \"18cb8033-bd36-4b53-8f71-7b2d8d527270\") " Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.573644 4895 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cb8033-bd36-4b53-8f71-7b2d8d527270-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "18cb8033-bd36-4b53-8f71-7b2d8d527270" (UID: "18cb8033-bd36-4b53-8f71-7b2d8d527270"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.573795 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18cb8033-bd36-4b53-8f71-7b2d8d527270-var-lock" (OuterVolumeSpecName: "var-lock") pod "18cb8033-bd36-4b53-8f71-7b2d8d527270" (UID: "18cb8033-bd36-4b53-8f71-7b2d8d527270"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.583472 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18cb8033-bd36-4b53-8f71-7b2d8d527270-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "18cb8033-bd36-4b53-8f71-7b2d8d527270" (UID: "18cb8033-bd36-4b53-8f71-7b2d8d527270"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.675054 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/18cb8033-bd36-4b53-8f71-7b2d8d527270-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.675098 4895 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18cb8033-bd36-4b53-8f71-7b2d8d527270-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:55 crc kubenswrapper[4895]: I0129 16:15:55.675109 4895 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/18cb8033-bd36-4b53-8f71-7b2d8d527270-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.203199 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.205133 4895 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca" exitCode=0 Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.207601 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"18cb8033-bd36-4b53-8f71-7b2d8d527270","Type":"ContainerDied","Data":"5c914e932b21987209df6f2cd235f85edec0c1daa8731a9e521c079947585921"} Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.207669 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c914e932b21987209df6f2cd235f85edec0c1daa8731a9e521c079947585921" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.207688 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.235287 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.236161 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.598113 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.600087 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.600698 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.602207 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.602494 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.687640 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.687788 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.688219 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.688253 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.688322 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.688373 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.688577 4895 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.688594 4895 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:56 crc kubenswrapper[4895]: I0129 16:15:56.688604 4895 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:57 crc kubenswrapper[4895]: E0129 16:15:57.015950 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:57 crc kubenswrapper[4895]: E0129 16:15:57.016628 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:57 crc kubenswrapper[4895]: E0129 16:15:57.017148 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:57 crc kubenswrapper[4895]: E0129 16:15:57.017492 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection 
refused" Jan 29 16:15:57 crc kubenswrapper[4895]: E0129 16:15:57.017729 4895 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.017764 4895 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 16:15:57 crc kubenswrapper[4895]: E0129 16:15:57.018134 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="200ms" Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.083120 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.083188 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.083986 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.084480 4895 status_manager.go:851] "Failed to get status for pod" 
podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:57 crc kubenswrapper[4895]: E0129 16:15:57.218760 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="400ms" Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.230758 4895 scope.go:117] "RemoveContainer" containerID="0d27e37b6ec9b55094c83a9908b9bc6d3f9f96c52a6bc8fb0b279a1126aabda0" Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.230892 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.232017 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.232504 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.232896 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.240487 4895 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.241238 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.241494 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.254766 4895 scope.go:117] "RemoveContainer" containerID="c1c452a93555d55fd3bcaffc9ae26d77747c3018cb1c01047bd3d641e609c105"
Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.269150 4895 scope.go:117] "RemoveContainer" containerID="8febf23c04c3cb94d0abf0b827255932667ae3e890ec71da29ab08e1343f2a4e"
Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.283756 4895 scope.go:117] "RemoveContainer" containerID="0431a79b482b7bd976ca56cc434f377da39c0a7c57ec953a690136b72098377b"
Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.294896 4895 scope.go:117] "RemoveContainer" containerID="a0b337ab477f97a96934ad6478ee51c2ff3bc311429e5f620cd3a30219d33cca"
Jan 29 16:15:57 crc kubenswrapper[4895]: I0129 16:15:57.310739 4895 scope.go:117] "RemoveContainer" containerID="7179712e54010faa169f51cb5a9cfa56bf2e1352820246e25eeb853231450639"
Jan 29 16:15:57 crc kubenswrapper[4895]: E0129 16:15:57.620629 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="800ms"
Jan 29 16:15:58 crc kubenswrapper[4895]: E0129 16:15:58.421635 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="1.6s"
Jan 29 16:15:58 crc kubenswrapper[4895]: E0129 16:15:58.577341 4895 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 29 16:15:58 crc kubenswrapper[4895]: I0129 16:15:58.577892 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 29 16:15:58 crc kubenswrapper[4895]: W0129 16:15:58.609531 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bb9d8df675340066d3ad73f4ed6ffa11b16220bd416658e45d4c13dd244e12c0 WatchSource:0}: Error finding container bb9d8df675340066d3ad73f4ed6ffa11b16220bd416658e45d4c13dd244e12c0: Status 404 returned error can't find the container with id bb9d8df675340066d3ad73f4ed6ffa11b16220bd416658e45d4c13dd244e12c0
Jan 29 16:15:58 crc kubenswrapper[4895]: E0129 16:15:58.614484 4895 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3fd40fec5267 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:15:58.612656743 +0000 UTC m=+242.415634007,LastTimestamp:2026-01-29 16:15:58.612656743 +0000 UTC m=+242.415634007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 16:15:59 crc kubenswrapper[4895]: I0129 16:15:59.253792 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"83aca0c1bba203ebdfed77e29e3525e4eeef57e76b9ddc90f66e894042464cad"}
Jan 29 16:15:59 crc kubenswrapper[4895]: I0129 16:15:59.253851 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bb9d8df675340066d3ad73f4ed6ffa11b16220bd416658e45d4c13dd244e12c0"}
Jan 29 16:15:59 crc kubenswrapper[4895]: I0129 16:15:59.254764 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:15:59 crc kubenswrapper[4895]: I0129 16:15:59.254947 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:15:59 crc kubenswrapper[4895]: E0129 16:15:59.254931 4895 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 29 16:16:00 crc kubenswrapper[4895]: E0129 16:16:00.023257 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="3.2s"
Jan 29 16:16:00 crc kubenswrapper[4895]: E0129 16:16:00.139796 4895 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" volumeName="registry-storage"
Jan 29 16:16:03 crc kubenswrapper[4895]: E0129 16:16:03.228446 4895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="6.4s"
Jan 29 16:16:05 crc kubenswrapper[4895]: E0129 16:16:05.277771 4895 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3fd40fec5267 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:15:58.612656743 +0000 UTC m=+242.415634007,LastTimestamp:2026-01-29 16:15:58.612656743 +0000 UTC m=+242.415634007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 16:16:06 crc kubenswrapper[4895]: I0129 16:16:06.312290 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 29 16:16:06 crc kubenswrapper[4895]: I0129 16:16:06.312356 4895 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e" exitCode=1
Jan 29 16:16:06 crc kubenswrapper[4895]: I0129 16:16:06.312428 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e"}
Jan 29 16:16:06 crc kubenswrapper[4895]: I0129 16:16:06.313247 4895 scope.go:117] "RemoveContainer" containerID="6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e"
Jan 29 16:16:06 crc kubenswrapper[4895]: I0129 16:16:06.314445 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:06 crc kubenswrapper[4895]: I0129 16:16:06.314932 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:06 crc kubenswrapper[4895]: I0129 16:16:06.315606 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.036384 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.039578 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.040394 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.040705 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.041225 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.041553 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.041932 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.055890 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de1f2b4f-67c3-46de-b3c9-6d005a487da5"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.055930 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de1f2b4f-67c3-46de-b3c9-6d005a487da5"
Jan 29 16:16:07 crc kubenswrapper[4895]: E0129 16:16:07.056550 4895 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.057218 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:16:07 crc kubenswrapper[4895]: W0129 16:16:07.080298 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-3163e08d4a87077ade7a1ca7d6dd60bb57f80d2eb6fac956c1f9b5689db5db66 WatchSource:0}: Error finding container 3163e08d4a87077ade7a1ca7d6dd60bb57f80d2eb6fac956c1f9b5689db5db66: Status 404 returned error can't find the container with id 3163e08d4a87077ade7a1ca7d6dd60bb57f80d2eb6fac956c1f9b5689db5db66
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.322074 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3163e08d4a87077ade7a1ca7d6dd60bb57f80d2eb6fac956c1f9b5689db5db66"}
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.340954 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.341024 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"766efcacc3dcae6b6f922b8cda086bfb166fbca341394fe9e51d7859efbbbd6d"}
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.343103 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.343632 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:07 crc kubenswrapper[4895]: I0129 16:16:07.343927 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:08 crc kubenswrapper[4895]: I0129 16:16:08.358542 4895 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c4e24d5fdca8099e598d0af051bb0e097ca8cce34aaf8be76f15eb2b28e6d24d" exitCode=0
Jan 29 16:16:08 crc kubenswrapper[4895]: I0129 16:16:08.358717 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c4e24d5fdca8099e598d0af051bb0e097ca8cce34aaf8be76f15eb2b28e6d24d"}
Jan 29 16:16:08 crc kubenswrapper[4895]: I0129 16:16:08.359945 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de1f2b4f-67c3-46de-b3c9-6d005a487da5"
Jan 29 16:16:08 crc kubenswrapper[4895]: I0129 16:16:08.360000 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de1f2b4f-67c3-46de-b3c9-6d005a487da5"
Jan 29 16:16:08 crc kubenswrapper[4895]: I0129 16:16:08.360092 4895 status_manager.go:851] "Failed to get status for pod" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:08 crc kubenswrapper[4895]: E0129 16:16:08.360426 4895 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:16:08 crc kubenswrapper[4895]: I0129 16:16:08.360937 4895 status_manager.go:851] "Failed to get status for pod" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" pod="openshift-marketplace/redhat-marketplace-p8lq7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-p8lq7\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:08 crc kubenswrapper[4895]: I0129 16:16:08.361335 4895 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused"
Jan 29 16:16:09 crc kubenswrapper[4895]: I0129 16:16:09.369694 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"40cc9f7c38d644b222d20f81ca200052242552c8c8c74a58037063acb96349df"}
Jan 29 16:16:09 crc kubenswrapper[4895]: I0129 16:16:09.370120 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a4e9ff3c26841603805d66c110b66c479e45889b19e72c3298edb83a721e65f6"}
Jan 29 16:16:09 crc kubenswrapper[4895]: I0129 16:16:09.370135 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0f9ae413fe7eadfb4a7d62ffbc9ec4121873764555c0b960b948095f2c6b0f09"}
Jan 29 16:16:09 crc kubenswrapper[4895]: I0129 16:16:09.370147 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5b0805f352858ecf3b06c5147a7cc976d514a558aed2e4660964fe482bea323f"}
Jan 29 16:16:09 crc kubenswrapper[4895]: I0129 16:16:09.520890 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:16:09 crc kubenswrapper[4895]: I0129 16:16:09.521565 4895 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 29 16:16:09 crc kubenswrapper[4895]: I0129 16:16:09.521667 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 16:16:10 crc kubenswrapper[4895]: I0129 16:16:10.422203 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"97352e278c874c3c7fd390e12df65b0432ab438fd3d9d14a3d8cf2c757fdd987"}
Jan 29 16:16:10 crc kubenswrapper[4895]: I0129 16:16:10.422700 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:16:10 crc kubenswrapper[4895]: I0129 16:16:10.422763 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de1f2b4f-67c3-46de-b3c9-6d005a487da5"
Jan 29 16:16:10 crc kubenswrapper[4895]: I0129 16:16:10.422784 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de1f2b4f-67c3-46de-b3c9-6d005a487da5"
Jan 29 16:16:12 crc kubenswrapper[4895]: I0129 16:16:12.057667 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:16:12 crc kubenswrapper[4895]: I0129 16:16:12.057730 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:16:12 crc kubenswrapper[4895]: I0129 16:16:12.066942 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:16:12 crc kubenswrapper[4895]: I0129 16:16:12.251640 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:16:15 crc kubenswrapper[4895]: I0129 16:16:15.434797 4895 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:16:16 crc kubenswrapper[4895]: I0129 16:16:16.470920 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de1f2b4f-67c3-46de-b3c9-6d005a487da5"
Jan 29 16:16:16 crc kubenswrapper[4895]: I0129 16:16:16.471374 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de1f2b4f-67c3-46de-b3c9-6d005a487da5"
Jan 29 16:16:16 crc kubenswrapper[4895]: I0129 16:16:16.477856 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:16:17 crc kubenswrapper[4895]: I0129 16:16:17.050748 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="18b02581-cc92-4640-bb72-87179f269050"
Jan 29 16:16:17 crc kubenswrapper[4895]: I0129 16:16:17.476365 4895 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de1f2b4f-67c3-46de-b3c9-6d005a487da5"
Jan 29 16:16:17 crc kubenswrapper[4895]: I0129 16:16:17.476410 4895 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="de1f2b4f-67c3-46de-b3c9-6d005a487da5"
Jan 29 16:16:17 crc kubenswrapper[4895]: I0129 16:16:17.480032 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="18b02581-cc92-4640-bb72-87179f269050"
Jan 29 16:16:19 crc kubenswrapper[4895]: I0129 16:16:19.522133 4895 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 29 16:16:19 crc kubenswrapper[4895]: I0129 16:16:19.523115 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 16:16:25 crc kubenswrapper[4895]: I0129 16:16:25.571396 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 16:16:25 crc kubenswrapper[4895]: I0129 16:16:25.924450 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 29 16:16:26 crc kubenswrapper[4895]: I0129 16:16:26.715276 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 29 16:16:26 crc kubenswrapper[4895]: I0129 16:16:26.778948 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 16:16:26 crc kubenswrapper[4895]: I0129 16:16:26.797647 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 29 16:16:26 crc kubenswrapper[4895]: I0129 16:16:26.907772 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.152741 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.157581 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.165051 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.236327 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.401111 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.527089 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.664506 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.706433 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.725341 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.792759 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.838127 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.893838 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.927071 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 29 16:16:27 crc kubenswrapper[4895]: I0129 16:16:27.937607 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.004618 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.119894 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.492567 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.500463 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.573261 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.575939 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.605061 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.626347 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.670206 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.690480 4895 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.740603 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.835854 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.849711 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.867281 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.910546 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.927684 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.946322 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 16:16:28 crc kubenswrapper[4895]: I0129 16:16:28.956762 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.014167 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.017393 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.043325 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.050057 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.079368 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.189487 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.232789 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.285224 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.294236 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.336198 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.347462 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.373902 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.384173 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.522250 4895 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.522366 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.522440 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.523450 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"766efcacc3dcae6b6f922b8cda086bfb166fbca341394fe9e51d7859efbbbd6d"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.523585 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://766efcacc3dcae6b6f922b8cda086bfb166fbca341394fe9e51d7859efbbbd6d" gracePeriod=30
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.605649 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.650713 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.856474 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.917694 4895 reflector.go:368]
Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.957713 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.964418 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 16:16:29 crc kubenswrapper[4895]: I0129 16:16:29.996930 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.078676 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.201460 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.280569 4895 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.327243 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.378105 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.421708 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.451325 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.464257 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.504930 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.535650 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.542006 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.903232 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.954824 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 16:16:30 crc kubenswrapper[4895]: I0129 16:16:30.965966 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.005535 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.066579 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.093363 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.098594 4895 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.127461 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.143206 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.149719 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.169080 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.294591 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.390923 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.437163 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.440992 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.461671 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.615931 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 16:16:31 crc kubenswrapper[4895]: 
I0129 16:16:31.623920 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.662765 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.742915 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.757425 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.780353 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.813009 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.833926 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.920314 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.981933 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 16:16:31 crc kubenswrapper[4895]: I0129 16:16:31.985387 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.023635 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.129537 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.140104 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.174790 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.201461 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.304500 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.381611 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.401933 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.443108 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.463691 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.548440 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.555089 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.603856 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.612418 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.621515 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.660541 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.714853 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.740684 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.803998 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.903063 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.928706 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.942233 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 16:16:32 crc kubenswrapper[4895]: I0129 16:16:32.951443 
4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.137444 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.154593 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.171376 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.266655 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.284785 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.289141 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.316251 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.346660 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.375412 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.589327 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 
29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.604409 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.635096 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.676888 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.683787 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.714559 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.739930 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.762842 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.791800 4895 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.828185 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.872326 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.880775 4895 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 16:16:33 crc kubenswrapper[4895]: I0129 16:16:33.977020 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.021761 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.029234 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.040333 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.205921 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.354833 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.436259 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.459783 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.526091 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.552019 4895 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.589191 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.742321 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.828463 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.856583 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 16:16:34 crc kubenswrapper[4895]: I0129 16:16:34.872265 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.088987 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.105746 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.177839 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.182262 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.247460 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.249328 4895 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.297592 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.311407 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.325764 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.360497 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.521445 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.531107 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.567456 4895 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.576048 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p8lq7","openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.576143 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.576174 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-4slsz","openshift-marketplace/redhat-operators-zdds6","openshift-marketplace/certified-operators-4tj74","openshift-marketplace/marketplace-operator-79b997595-j8jdx","openshift-marketplace/redhat-marketplace-m467m","openshift-marketplace/redhat-operators-z9nrd"] Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.576565 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m467m" podUID="8c405329-c382-44a2-8c9d-74976164f122" containerName="registry-server" containerID="cri-o://5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f" gracePeriod=30 Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.577145 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" podUID="249177f9-7b1e-4d39-a400-e625862f53c3" containerName="marketplace-operator" containerID="cri-o://fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4" gracePeriod=30 Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.577433 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4slsz" podUID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" containerName="registry-server" containerID="cri-o://4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b" gracePeriod=30 Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.577746 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zdds6" podUID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" containerName="registry-server" containerID="cri-o://d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d" gracePeriod=30 Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.578340 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-4tj74" podUID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" containerName="registry-server" containerID="cri-o://80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281" gracePeriod=30 Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.578789 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z9nrd" podUID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" containerName="registry-server" containerID="cri-o://a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da" gracePeriod=2 Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.585554 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.605448 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.666652 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.66662787 podStartE2EDuration="20.66662787s" podCreationTimestamp="2026-01-29 16:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:16:35.62571159 +0000 UTC m=+279.428688874" watchObservedRunningTime="2026-01-29 16:16:35.66662787 +0000 UTC m=+279.469605134" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.735329 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.749330 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.777994 4895 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.945897 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.950315 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.965814 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 16:16:35 crc kubenswrapper[4895]: E0129 16:16:35.966028 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b is running failed: container process not found" containerID="4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 16:16:35 crc kubenswrapper[4895]: E0129 16:16:35.966542 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b is running failed: container process not found" containerID="4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 16:16:35 crc kubenswrapper[4895]: E0129 16:16:35.967036 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b is running failed: container process not found" 
containerID="4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 16:16:35 crc kubenswrapper[4895]: E0129 16:16:35.967103 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-4slsz" podUID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" containerName="registry-server" Jan 29 16:16:35 crc kubenswrapper[4895]: I0129 16:16:35.989804 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.054834 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.055310 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.116942 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.138185 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdds6" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.159593 4895 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.181456 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.208322 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.214136 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.218838 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9nrd" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.221903 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf371b3-ba58-4be0-a05c-b88d01ffc60d-utilities\") pod \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\" (UID: \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.221973 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv8c6\" (UniqueName: \"kubernetes.io/projected/adf371b3-ba58-4be0-a05c-b88d01ffc60d-kube-api-access-fv8c6\") pod \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\" (UID: \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.221998 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/adf371b3-ba58-4be0-a05c-b88d01ffc60d-catalog-content\") pod \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\" (UID: \"adf371b3-ba58-4be0-a05c-b88d01ffc60d\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.222945 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf371b3-ba58-4be0-a05c-b88d01ffc60d-utilities" (OuterVolumeSpecName: "utilities") pod "adf371b3-ba58-4be0-a05c-b88d01ffc60d" (UID: "adf371b3-ba58-4be0-a05c-b88d01ffc60d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.223739 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.231529 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf371b3-ba58-4be0-a05c-b88d01ffc60d-kube-api-access-fv8c6" (OuterVolumeSpecName: "kube-api-access-fv8c6") pod "adf371b3-ba58-4be0-a05c-b88d01ffc60d" (UID: "adf371b3-ba58-4be0-a05c-b88d01ffc60d"). InnerVolumeSpecName "kube-api-access-fv8c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.236202 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.281236 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323354 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-catalog-content\") pod \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\" (UID: \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323418 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06277e29-59af-446a-81a3-b3b8b1b5ab0a-utilities\") pod \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\" (UID: \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323447 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtmch\" (UniqueName: \"kubernetes.io/projected/06277e29-59af-446a-81a3-b3b8b1b5ab0a-kube-api-access-rtmch\") pod \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\" (UID: \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323475 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-utilities\") pod \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\" (UID: \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323496 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9lrm\" (UniqueName: \"kubernetes.io/projected/044df5fd-0d96-4aab-b09e-24870d0e4bd9-kube-api-access-x9lrm\") pod \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\" 
(UID: \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323518 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c405329-c382-44a2-8c9d-74976164f122-catalog-content\") pod \"8c405329-c382-44a2-8c9d-74976164f122\" (UID: \"8c405329-c382-44a2-8c9d-74976164f122\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323562 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044df5fd-0d96-4aab-b09e-24870d0e4bd9-utilities\") pod \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\" (UID: \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323580 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06277e29-59af-446a-81a3-b3b8b1b5ab0a-catalog-content\") pod \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\" (UID: \"06277e29-59af-446a-81a3-b3b8b1b5ab0a\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l2zq\" (UniqueName: \"kubernetes.io/projected/8c405329-c382-44a2-8c9d-74976164f122-kube-api-access-8l2zq\") pod \"8c405329-c382-44a2-8c9d-74976164f122\" (UID: \"8c405329-c382-44a2-8c9d-74976164f122\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323681 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgxw5\" (UniqueName: \"kubernetes.io/projected/249177f9-7b1e-4d39-a400-e625862f53c3-kube-api-access-kgxw5\") pod \"249177f9-7b1e-4d39-a400-e625862f53c3\" (UID: \"249177f9-7b1e-4d39-a400-e625862f53c3\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323708 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044df5fd-0d96-4aab-b09e-24870d0e4bd9-catalog-content\") pod \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\" (UID: \"044df5fd-0d96-4aab-b09e-24870d0e4bd9\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323749 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5ts9\" (UniqueName: \"kubernetes.io/projected/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-kube-api-access-b5ts9\") pod \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\" (UID: \"f5ead4e3-bcde-4c47-8173-9f7773f0a45f\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323778 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c405329-c382-44a2-8c9d-74976164f122-utilities\") pod \"8c405329-c382-44a2-8c9d-74976164f122\" (UID: \"8c405329-c382-44a2-8c9d-74976164f122\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323801 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/249177f9-7b1e-4d39-a400-e625862f53c3-marketplace-operator-metrics\") pod \"249177f9-7b1e-4d39-a400-e625862f53c3\" (UID: \"249177f9-7b1e-4d39-a400-e625862f53c3\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.323829 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/249177f9-7b1e-4d39-a400-e625862f53c3-marketplace-trusted-ca\") pod \"249177f9-7b1e-4d39-a400-e625862f53c3\" (UID: \"249177f9-7b1e-4d39-a400-e625862f53c3\") " Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.324131 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf371b3-ba58-4be0-a05c-b88d01ffc60d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.324153 4895 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv8c6\" (UniqueName: \"kubernetes.io/projected/adf371b3-ba58-4be0-a05c-b88d01ffc60d-kube-api-access-fv8c6\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.324557 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-utilities" (OuterVolumeSpecName: "utilities") pod "f5ead4e3-bcde-4c47-8173-9f7773f0a45f" (UID: "f5ead4e3-bcde-4c47-8173-9f7773f0a45f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.324572 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06277e29-59af-446a-81a3-b3b8b1b5ab0a-utilities" (OuterVolumeSpecName: "utilities") pod "06277e29-59af-446a-81a3-b3b8b1b5ab0a" (UID: "06277e29-59af-446a-81a3-b3b8b1b5ab0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.327038 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/044df5fd-0d96-4aab-b09e-24870d0e4bd9-utilities" (OuterVolumeSpecName: "utilities") pod "044df5fd-0d96-4aab-b09e-24870d0e4bd9" (UID: "044df5fd-0d96-4aab-b09e-24870d0e4bd9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.327367 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c405329-c382-44a2-8c9d-74976164f122-utilities" (OuterVolumeSpecName: "utilities") pod "8c405329-c382-44a2-8c9d-74976164f122" (UID: "8c405329-c382-44a2-8c9d-74976164f122"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.327524 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/249177f9-7b1e-4d39-a400-e625862f53c3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "249177f9-7b1e-4d39-a400-e625862f53c3" (UID: "249177f9-7b1e-4d39-a400-e625862f53c3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.336142 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c405329-c382-44a2-8c9d-74976164f122-kube-api-access-8l2zq" (OuterVolumeSpecName: "kube-api-access-8l2zq") pod "8c405329-c382-44a2-8c9d-74976164f122" (UID: "8c405329-c382-44a2-8c9d-74976164f122"). InnerVolumeSpecName "kube-api-access-8l2zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.336222 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/044df5fd-0d96-4aab-b09e-24870d0e4bd9-kube-api-access-x9lrm" (OuterVolumeSpecName: "kube-api-access-x9lrm") pod "044df5fd-0d96-4aab-b09e-24870d0e4bd9" (UID: "044df5fd-0d96-4aab-b09e-24870d0e4bd9"). InnerVolumeSpecName "kube-api-access-x9lrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.337783 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/249177f9-7b1e-4d39-a400-e625862f53c3-kube-api-access-kgxw5" (OuterVolumeSpecName: "kube-api-access-kgxw5") pod "249177f9-7b1e-4d39-a400-e625862f53c3" (UID: "249177f9-7b1e-4d39-a400-e625862f53c3"). InnerVolumeSpecName "kube-api-access-kgxw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.338401 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06277e29-59af-446a-81a3-b3b8b1b5ab0a-kube-api-access-rtmch" (OuterVolumeSpecName: "kube-api-access-rtmch") pod "06277e29-59af-446a-81a3-b3b8b1b5ab0a" (UID: "06277e29-59af-446a-81a3-b3b8b1b5ab0a"). InnerVolumeSpecName "kube-api-access-rtmch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.338814 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/249177f9-7b1e-4d39-a400-e625862f53c3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "249177f9-7b1e-4d39-a400-e625862f53c3" (UID: "249177f9-7b1e-4d39-a400-e625862f53c3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.340253 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-kube-api-access-b5ts9" (OuterVolumeSpecName: "kube-api-access-b5ts9") pod "f5ead4e3-bcde-4c47-8173-9f7773f0a45f" (UID: "f5ead4e3-bcde-4c47-8173-9f7773f0a45f"). InnerVolumeSpecName "kube-api-access-b5ts9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.350081 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf371b3-ba58-4be0-a05c-b88d01ffc60d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adf371b3-ba58-4be0-a05c-b88d01ffc60d" (UID: "adf371b3-ba58-4be0-a05c-b88d01ffc60d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.359138 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c405329-c382-44a2-8c9d-74976164f122-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c405329-c382-44a2-8c9d-74976164f122" (UID: "8c405329-c382-44a2-8c9d-74976164f122"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.370894 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.395695 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/044df5fd-0d96-4aab-b09e-24870d0e4bd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "044df5fd-0d96-4aab-b09e-24870d0e4bd9" (UID: "044df5fd-0d96-4aab-b09e-24870d0e4bd9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.415264 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5ead4e3-bcde-4c47-8173-9f7773f0a45f" (UID: "f5ead4e3-bcde-4c47-8173-9f7773f0a45f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.425816 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.425890 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06277e29-59af-446a-81a3-b3b8b1b5ab0a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.425902 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtmch\" (UniqueName: \"kubernetes.io/projected/06277e29-59af-446a-81a3-b3b8b1b5ab0a-kube-api-access-rtmch\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.425915 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.425924 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9lrm\" (UniqueName: \"kubernetes.io/projected/044df5fd-0d96-4aab-b09e-24870d0e4bd9-kube-api-access-x9lrm\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.425934 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c405329-c382-44a2-8c9d-74976164f122-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.425962 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044df5fd-0d96-4aab-b09e-24870d0e4bd9-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.425972 
4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l2zq\" (UniqueName: \"kubernetes.io/projected/8c405329-c382-44a2-8c9d-74976164f122-kube-api-access-8l2zq\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.425983 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf371b3-ba58-4be0-a05c-b88d01ffc60d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.425995 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgxw5\" (UniqueName: \"kubernetes.io/projected/249177f9-7b1e-4d39-a400-e625862f53c3-kube-api-access-kgxw5\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.426003 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044df5fd-0d96-4aab-b09e-24870d0e4bd9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.426013 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5ts9\" (UniqueName: \"kubernetes.io/projected/f5ead4e3-bcde-4c47-8173-9f7773f0a45f-kube-api-access-b5ts9\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.426021 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c405329-c382-44a2-8c9d-74976164f122-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.426047 4895 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/249177f9-7b1e-4d39-a400-e625862f53c3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.426093 4895 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/249177f9-7b1e-4d39-a400-e625862f53c3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.461056 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06277e29-59af-446a-81a3-b3b8b1b5ab0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06277e29-59af-446a-81a3-b3b8b1b5ab0a" (UID: "06277e29-59af-446a-81a3-b3b8b1b5ab0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.522132 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.527779 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06277e29-59af-446a-81a3-b3b8b1b5ab0a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.528520 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.533619 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.538760 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.556408 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.612136 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" containerID="d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d" exitCode=0 Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.612234 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdds6" event={"ID":"adf371b3-ba58-4be0-a05c-b88d01ffc60d","Type":"ContainerDied","Data":"d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d"} Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.612270 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdds6" event={"ID":"adf371b3-ba58-4be0-a05c-b88d01ffc60d","Type":"ContainerDied","Data":"f45d82106559d81e5e7bfff8000ec7dc8c4820220a6d11b4186e6842da8476b5"} Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.612291 4895 scope.go:117] "RemoveContainer" containerID="d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.612239 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdds6" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.616935 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.620306 4895 generic.go:334] "Generic (PLEG): container finished" podID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" containerID="4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b" exitCode=0 Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.620393 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4slsz" event={"ID":"f5ead4e3-bcde-4c47-8173-9f7773f0a45f","Type":"ContainerDied","Data":"4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b"} Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.620525 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4slsz" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.620744 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4slsz" event={"ID":"f5ead4e3-bcde-4c47-8173-9f7773f0a45f","Type":"ContainerDied","Data":"4b099004196c328a2f02bedee55265d20bdfd31530d92b7b533c9b69b81120b8"} Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.624910 4895 generic.go:334] "Generic (PLEG): container finished" podID="249177f9-7b1e-4d39-a400-e625862f53c3" containerID="fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4" exitCode=0 Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.624984 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" event={"ID":"249177f9-7b1e-4d39-a400-e625862f53c3","Type":"ContainerDied","Data":"fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4"} Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.625009 
4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" event={"ID":"249177f9-7b1e-4d39-a400-e625862f53c3","Type":"ContainerDied","Data":"f7ece3abb26d25a5f2ce425e121af6f6042a6869c4e32dfd071ee7bda9b270ba"} Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.625093 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j8jdx" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.626538 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.629252 4895 generic.go:334] "Generic (PLEG): container finished" podID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" containerID="a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da" exitCode=0 Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.629331 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9nrd" event={"ID":"06277e29-59af-446a-81a3-b3b8b1b5ab0a","Type":"ContainerDied","Data":"a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da"} Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.629361 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9nrd" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.629388 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9nrd" event={"ID":"06277e29-59af-446a-81a3-b3b8b1b5ab0a","Type":"ContainerDied","Data":"154e4c74e520a280f0710fb035e574040131fa56140bf9e8ccaf9f5fa70db0b1"} Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.634841 4895 generic.go:334] "Generic (PLEG): container finished" podID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" containerID="80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281" exitCode=0 Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.634909 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tj74" event={"ID":"044df5fd-0d96-4aab-b09e-24870d0e4bd9","Type":"ContainerDied","Data":"80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281"} Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.634926 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tj74" event={"ID":"044df5fd-0d96-4aab-b09e-24870d0e4bd9","Type":"ContainerDied","Data":"76567b2220c54295fe508af45a7c9c5936e539dde4b78ad517c6389e8fced071"} Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.634993 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4tj74" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.644213 4895 scope.go:117] "RemoveContainer" containerID="86af98a4f397d398c5d87c0a4126d134af2b0c710a7d458592d5f290d1cf7d8c" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.651516 4895 generic.go:334] "Generic (PLEG): container finished" podID="8c405329-c382-44a2-8c9d-74976164f122" containerID="5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f" exitCode=0 Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.651551 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m467m" event={"ID":"8c405329-c382-44a2-8c9d-74976164f122","Type":"ContainerDied","Data":"5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f"} Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.651573 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m467m" event={"ID":"8c405329-c382-44a2-8c9d-74976164f122","Type":"ContainerDied","Data":"ddd0101747a64270f1f63b0d77df7619ace8236cdeea96710fd6efe814103503"} Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.651633 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m467m" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.663938 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.671807 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.677268 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdds6"] Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.680230 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zdds6"] Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.703402 4895 scope.go:117] "RemoveContainer" containerID="4d156a5f67fd0630f5f9db3e3f4c82a9c71a813c0c997131eab9cb585b44f237" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.719780 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.773326 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.784258 4895 scope.go:117] "RemoveContainer" containerID="d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d" Jan 29 16:16:36 crc kubenswrapper[4895]: E0129 16:16:36.784767 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d\": container with ID starting with d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d not found: ID does not exist" containerID="d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d" Jan 29 
16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.784817 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d"} err="failed to get container status \"d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d\": rpc error: code = NotFound desc = could not find container \"d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d\": container with ID starting with d99b46dc9b279bff785aef5bc67ab5a2cbac35166e52af91706139f9c335792d not found: ID does not exist" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.784855 4895 scope.go:117] "RemoveContainer" containerID="86af98a4f397d398c5d87c0a4126d134af2b0c710a7d458592d5f290d1cf7d8c" Jan 29 16:16:36 crc kubenswrapper[4895]: E0129 16:16:36.786089 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86af98a4f397d398c5d87c0a4126d134af2b0c710a7d458592d5f290d1cf7d8c\": container with ID starting with 86af98a4f397d398c5d87c0a4126d134af2b0c710a7d458592d5f290d1cf7d8c not found: ID does not exist" containerID="86af98a4f397d398c5d87c0a4126d134af2b0c710a7d458592d5f290d1cf7d8c" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.786135 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86af98a4f397d398c5d87c0a4126d134af2b0c710a7d458592d5f290d1cf7d8c"} err="failed to get container status \"86af98a4f397d398c5d87c0a4126d134af2b0c710a7d458592d5f290d1cf7d8c\": rpc error: code = NotFound desc = could not find container \"86af98a4f397d398c5d87c0a4126d134af2b0c710a7d458592d5f290d1cf7d8c\": container with ID starting with 86af98a4f397d398c5d87c0a4126d134af2b0c710a7d458592d5f290d1cf7d8c not found: ID does not exist" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.786168 4895 scope.go:117] "RemoveContainer" 
containerID="4d156a5f67fd0630f5f9db3e3f4c82a9c71a813c0c997131eab9cb585b44f237" Jan 29 16:16:36 crc kubenswrapper[4895]: E0129 16:16:36.786700 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d156a5f67fd0630f5f9db3e3f4c82a9c71a813c0c997131eab9cb585b44f237\": container with ID starting with 4d156a5f67fd0630f5f9db3e3f4c82a9c71a813c0c997131eab9cb585b44f237 not found: ID does not exist" containerID="4d156a5f67fd0630f5f9db3e3f4c82a9c71a813c0c997131eab9cb585b44f237" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.786783 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d156a5f67fd0630f5f9db3e3f4c82a9c71a813c0c997131eab9cb585b44f237"} err="failed to get container status \"4d156a5f67fd0630f5f9db3e3f4c82a9c71a813c0c997131eab9cb585b44f237\": rpc error: code = NotFound desc = could not find container \"4d156a5f67fd0630f5f9db3e3f4c82a9c71a813c0c997131eab9cb585b44f237\": container with ID starting with 4d156a5f67fd0630f5f9db3e3f4c82a9c71a813c0c997131eab9cb585b44f237 not found: ID does not exist" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.786843 4895 scope.go:117] "RemoveContainer" containerID="4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.787851 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4tj74"] Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.798643 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4tj74"] Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.804694 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9nrd"] Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.808876 4895 scope.go:117] "RemoveContainer" 
containerID="5351b2551be4c700fb675eeb8334423b1cacf0b5b99bfa91ee42e82301a12a9a" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.813282 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z9nrd"] Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.819110 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j8jdx"] Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.829942 4895 scope.go:117] "RemoveContainer" containerID="d6278f9edfea55e7260d3fb076bff45470b2c32f849f1c63f501f76b9a913240" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.838425 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.844089 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.856722 4895 scope.go:117] "RemoveContainer" containerID="4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b" Jan 29 16:16:36 crc kubenswrapper[4895]: E0129 16:16:36.857429 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b\": container with ID starting with 4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b not found: ID does not exist" containerID="4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.857470 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b"} err="failed to get container status \"4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b\": rpc error: code = NotFound desc = could not 
find container \"4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b\": container with ID starting with 4df29437155ad06ad2a4da2cb475df31da3aed01334b65047feb870bb7547b4b not found: ID does not exist" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.857501 4895 scope.go:117] "RemoveContainer" containerID="5351b2551be4c700fb675eeb8334423b1cacf0b5b99bfa91ee42e82301a12a9a" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.857995 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j8jdx"] Jan 29 16:16:36 crc kubenswrapper[4895]: E0129 16:16:36.858571 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5351b2551be4c700fb675eeb8334423b1cacf0b5b99bfa91ee42e82301a12a9a\": container with ID starting with 5351b2551be4c700fb675eeb8334423b1cacf0b5b99bfa91ee42e82301a12a9a not found: ID does not exist" containerID="5351b2551be4c700fb675eeb8334423b1cacf0b5b99bfa91ee42e82301a12a9a" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.858629 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5351b2551be4c700fb675eeb8334423b1cacf0b5b99bfa91ee42e82301a12a9a"} err="failed to get container status \"5351b2551be4c700fb675eeb8334423b1cacf0b5b99bfa91ee42e82301a12a9a\": rpc error: code = NotFound desc = could not find container \"5351b2551be4c700fb675eeb8334423b1cacf0b5b99bfa91ee42e82301a12a9a\": container with ID starting with 5351b2551be4c700fb675eeb8334423b1cacf0b5b99bfa91ee42e82301a12a9a not found: ID does not exist" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.858665 4895 scope.go:117] "RemoveContainer" containerID="d6278f9edfea55e7260d3fb076bff45470b2c32f849f1c63f501f76b9a913240" Jan 29 16:16:36 crc kubenswrapper[4895]: E0129 16:16:36.859158 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"d6278f9edfea55e7260d3fb076bff45470b2c32f849f1c63f501f76b9a913240\": container with ID starting with d6278f9edfea55e7260d3fb076bff45470b2c32f849f1c63f501f76b9a913240 not found: ID does not exist" containerID="d6278f9edfea55e7260d3fb076bff45470b2c32f849f1c63f501f76b9a913240" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.859212 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6278f9edfea55e7260d3fb076bff45470b2c32f849f1c63f501f76b9a913240"} err="failed to get container status \"d6278f9edfea55e7260d3fb076bff45470b2c32f849f1c63f501f76b9a913240\": rpc error: code = NotFound desc = could not find container \"d6278f9edfea55e7260d3fb076bff45470b2c32f849f1c63f501f76b9a913240\": container with ID starting with d6278f9edfea55e7260d3fb076bff45470b2c32f849f1c63f501f76b9a913240 not found: ID does not exist" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.859248 4895 scope.go:117] "RemoveContainer" containerID="fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.862768 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4slsz"] Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.866280 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4slsz"] Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.870287 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.870605 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m467m"] Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.872730 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m467m"] Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.877640 4895 
scope.go:117] "RemoveContainer" containerID="fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4" Jan 29 16:16:36 crc kubenswrapper[4895]: E0129 16:16:36.878290 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4\": container with ID starting with fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4 not found: ID does not exist" containerID="fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.878370 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4"} err="failed to get container status \"fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4\": rpc error: code = NotFound desc = could not find container \"fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4\": container with ID starting with fed9a1f8ccfc9c03a68e3402f2e5f77e387db88d8d0b17e61487ff4f726609f4 not found: ID does not exist" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.878412 4895 scope.go:117] "RemoveContainer" containerID="a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.894064 4895 scope.go:117] "RemoveContainer" containerID="9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.920970 4895 scope.go:117] "RemoveContainer" containerID="4dba8f18612d6e0b2a5e6a854070b42fcf7aa28f4fed57e21a458dab3cd748da" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.923172 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.943136 4895 scope.go:117] "RemoveContainer" 
containerID="a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da" Jan 29 16:16:36 crc kubenswrapper[4895]: E0129 16:16:36.943616 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da\": container with ID starting with a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da not found: ID does not exist" containerID="a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.943770 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da"} err="failed to get container status \"a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da\": rpc error: code = NotFound desc = could not find container \"a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da\": container with ID starting with a41b5bbb578ff93c14b138515fa72a50961f890da010250406ed949374cf03da not found: ID does not exist" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.943886 4895 scope.go:117] "RemoveContainer" containerID="9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7" Jan 29 16:16:36 crc kubenswrapper[4895]: E0129 16:16:36.944330 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7\": container with ID starting with 9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7 not found: ID does not exist" containerID="9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.944422 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7"} err="failed to get container status \"9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7\": rpc error: code = NotFound desc = could not find container \"9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7\": container with ID starting with 9aa3efcdde90a11a92a6180c888019373772b61aa1e0e31e5dad8017970022b7 not found: ID does not exist" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.944492 4895 scope.go:117] "RemoveContainer" containerID="4dba8f18612d6e0b2a5e6a854070b42fcf7aa28f4fed57e21a458dab3cd748da" Jan 29 16:16:36 crc kubenswrapper[4895]: E0129 16:16:36.944796 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dba8f18612d6e0b2a5e6a854070b42fcf7aa28f4fed57e21a458dab3cd748da\": container with ID starting with 4dba8f18612d6e0b2a5e6a854070b42fcf7aa28f4fed57e21a458dab3cd748da not found: ID does not exist" containerID="4dba8f18612d6e0b2a5e6a854070b42fcf7aa28f4fed57e21a458dab3cd748da" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.944934 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dba8f18612d6e0b2a5e6a854070b42fcf7aa28f4fed57e21a458dab3cd748da"} err="failed to get container status \"4dba8f18612d6e0b2a5e6a854070b42fcf7aa28f4fed57e21a458dab3cd748da\": rpc error: code = NotFound desc = could not find container \"4dba8f18612d6e0b2a5e6a854070b42fcf7aa28f4fed57e21a458dab3cd748da\": container with ID starting with 4dba8f18612d6e0b2a5e6a854070b42fcf7aa28f4fed57e21a458dab3cd748da not found: ID does not exist" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.945028 4895 scope.go:117] "RemoveContainer" containerID="80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.961649 4895 scope.go:117] "RemoveContainer" 
containerID="daab7a0e703486405c7a24d8ee339adb29192010f8551230c398ca171d75eeb2" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.982751 4895 scope.go:117] "RemoveContainer" containerID="57e0597f2d75f437f6c1ae015335c7016f5fe859f5c43d46364211115f386101" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.998526 4895 scope.go:117] "RemoveContainer" containerID="80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281" Jan 29 16:16:36 crc kubenswrapper[4895]: E0129 16:16:36.999163 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281\": container with ID starting with 80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281 not found: ID does not exist" containerID="80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.999237 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281"} err="failed to get container status \"80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281\": rpc error: code = NotFound desc = could not find container \"80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281\": container with ID starting with 80472bb7a7c5a6208c9401725a7efcee002f241e5da24c0df53bcf59f393d281 not found: ID does not exist" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.999289 4895 scope.go:117] "RemoveContainer" containerID="daab7a0e703486405c7a24d8ee339adb29192010f8551230c398ca171d75eeb2" Jan 29 16:16:36 crc kubenswrapper[4895]: E0129 16:16:36.999908 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daab7a0e703486405c7a24d8ee339adb29192010f8551230c398ca171d75eeb2\": container with ID starting with 
daab7a0e703486405c7a24d8ee339adb29192010f8551230c398ca171d75eeb2 not found: ID does not exist" containerID="daab7a0e703486405c7a24d8ee339adb29192010f8551230c398ca171d75eeb2" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.999950 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daab7a0e703486405c7a24d8ee339adb29192010f8551230c398ca171d75eeb2"} err="failed to get container status \"daab7a0e703486405c7a24d8ee339adb29192010f8551230c398ca171d75eeb2\": rpc error: code = NotFound desc = could not find container \"daab7a0e703486405c7a24d8ee339adb29192010f8551230c398ca171d75eeb2\": container with ID starting with daab7a0e703486405c7a24d8ee339adb29192010f8551230c398ca171d75eeb2 not found: ID does not exist" Jan 29 16:16:36 crc kubenswrapper[4895]: I0129 16:16:36.999985 4895 scope.go:117] "RemoveContainer" containerID="57e0597f2d75f437f6c1ae015335c7016f5fe859f5c43d46364211115f386101" Jan 29 16:16:37 crc kubenswrapper[4895]: E0129 16:16:37.000264 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57e0597f2d75f437f6c1ae015335c7016f5fe859f5c43d46364211115f386101\": container with ID starting with 57e0597f2d75f437f6c1ae015335c7016f5fe859f5c43d46364211115f386101 not found: ID does not exist" containerID="57e0597f2d75f437f6c1ae015335c7016f5fe859f5c43d46364211115f386101" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.000421 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e0597f2d75f437f6c1ae015335c7016f5fe859f5c43d46364211115f386101"} err="failed to get container status \"57e0597f2d75f437f6c1ae015335c7016f5fe859f5c43d46364211115f386101\": rpc error: code = NotFound desc = could not find container \"57e0597f2d75f437f6c1ae015335c7016f5fe859f5c43d46364211115f386101\": container with ID starting with 57e0597f2d75f437f6c1ae015335c7016f5fe859f5c43d46364211115f386101 not found: ID does not 
exist" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.000638 4895 scope.go:117] "RemoveContainer" containerID="5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.015366 4895 scope.go:117] "RemoveContainer" containerID="c57e5ac2c197dc9ee67208ae2ccd6451e21dc44c1f4787373a40dc3e544958a9" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.029380 4895 scope.go:117] "RemoveContainer" containerID="1db9aaf91c52d48aebc8de689190f4736601af9f04b5870a7a133e722a4b6137" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.045220 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" path="/var/lib/kubelet/pods/044df5fd-0d96-4aab-b09e-24870d0e4bd9/volumes" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.047407 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" path="/var/lib/kubelet/pods/06277e29-59af-446a-81a3-b3b8b1b5ab0a/volumes" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.049142 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="249177f9-7b1e-4d39-a400-e625862f53c3" path="/var/lib/kubelet/pods/249177f9-7b1e-4d39-a400-e625862f53c3/volumes" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.049855 4895 scope.go:117] "RemoveContainer" containerID="5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f" Jan 29 16:16:37 crc kubenswrapper[4895]: E0129 16:16:37.050788 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f\": container with ID starting with 5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f not found: ID does not exist" containerID="5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 
16:16:37.050820 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" path="/var/lib/kubelet/pods/408c9cd8-1d91-4a4b-9e57-748578b4704e/volumes" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.050856 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f"} err="failed to get container status \"5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f\": rpc error: code = NotFound desc = could not find container \"5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f\": container with ID starting with 5e63d113cc5548a296c3b29a05ef2ec6484255170767cb71b9a8b3c46d34500f not found: ID does not exist" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.050923 4895 scope.go:117] "RemoveContainer" containerID="c57e5ac2c197dc9ee67208ae2ccd6451e21dc44c1f4787373a40dc3e544958a9" Jan 29 16:16:37 crc kubenswrapper[4895]: E0129 16:16:37.051315 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c57e5ac2c197dc9ee67208ae2ccd6451e21dc44c1f4787373a40dc3e544958a9\": container with ID starting with c57e5ac2c197dc9ee67208ae2ccd6451e21dc44c1f4787373a40dc3e544958a9 not found: ID does not exist" containerID="c57e5ac2c197dc9ee67208ae2ccd6451e21dc44c1f4787373a40dc3e544958a9" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.051385 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57e5ac2c197dc9ee67208ae2ccd6451e21dc44c1f4787373a40dc3e544958a9"} err="failed to get container status \"c57e5ac2c197dc9ee67208ae2ccd6451e21dc44c1f4787373a40dc3e544958a9\": rpc error: code = NotFound desc = could not find container \"c57e5ac2c197dc9ee67208ae2ccd6451e21dc44c1f4787373a40dc3e544958a9\": container with ID starting with 
c57e5ac2c197dc9ee67208ae2ccd6451e21dc44c1f4787373a40dc3e544958a9 not found: ID does not exist" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.051426 4895 scope.go:117] "RemoveContainer" containerID="1db9aaf91c52d48aebc8de689190f4736601af9f04b5870a7a133e722a4b6137" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.051566 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c405329-c382-44a2-8c9d-74976164f122" path="/var/lib/kubelet/pods/8c405329-c382-44a2-8c9d-74976164f122/volumes" Jan 29 16:16:37 crc kubenswrapper[4895]: E0129 16:16:37.051823 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1db9aaf91c52d48aebc8de689190f4736601af9f04b5870a7a133e722a4b6137\": container with ID starting with 1db9aaf91c52d48aebc8de689190f4736601af9f04b5870a7a133e722a4b6137 not found: ID does not exist" containerID="1db9aaf91c52d48aebc8de689190f4736601af9f04b5870a7a133e722a4b6137" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.051848 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db9aaf91c52d48aebc8de689190f4736601af9f04b5870a7a133e722a4b6137"} err="failed to get container status \"1db9aaf91c52d48aebc8de689190f4736601af9f04b5870a7a133e722a4b6137\": rpc error: code = NotFound desc = could not find container \"1db9aaf91c52d48aebc8de689190f4736601af9f04b5870a7a133e722a4b6137\": container with ID starting with 1db9aaf91c52d48aebc8de689190f4736601af9f04b5870a7a133e722a4b6137 not found: ID does not exist" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.052796 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" path="/var/lib/kubelet/pods/adf371b3-ba58-4be0-a05c-b88d01ffc60d/volumes" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.053472 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" path="/var/lib/kubelet/pods/f5ead4e3-bcde-4c47-8173-9f7773f0a45f/volumes" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.054280 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.066713 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.076857 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.218903 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.224175 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.243464 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.305551 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.403922 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.442969 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.554310 4895 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.696198 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.716675 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.747625 4895 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.748067 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://83aca0c1bba203ebdfed77e29e3525e4eeef57e76b9ddc90f66e894042464cad" gracePeriod=5 Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.810465 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 16:16:37 crc kubenswrapper[4895]: I0129 16:16:37.891202 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 16:16:38 crc kubenswrapper[4895]: I0129 16:16:38.046219 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 16:16:38 crc kubenswrapper[4895]: I0129 16:16:38.055781 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 16:16:38 crc kubenswrapper[4895]: I0129 16:16:38.122032 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:16:38 crc kubenswrapper[4895]: 
I0129 16:16:38.135987 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 16:16:38 crc kubenswrapper[4895]: I0129 16:16:38.255613 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 16:16:38 crc kubenswrapper[4895]: I0129 16:16:38.435850 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 16:16:38 crc kubenswrapper[4895]: I0129 16:16:38.532362 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 16:16:38 crc kubenswrapper[4895]: I0129 16:16:38.931756 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 16:16:39 crc kubenswrapper[4895]: I0129 16:16:39.047939 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 16:16:39 crc kubenswrapper[4895]: I0129 16:16:39.098800 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 16:16:39 crc kubenswrapper[4895]: I0129 16:16:39.110741 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 16:16:39 crc kubenswrapper[4895]: I0129 16:16:39.196810 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 16:16:39 crc kubenswrapper[4895]: I0129 16:16:39.219095 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 16:16:39 crc kubenswrapper[4895]: I0129 16:16:39.248396 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 
16:16:39 crc kubenswrapper[4895]: I0129 16:16:39.295197 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 16:16:39 crc kubenswrapper[4895]: I0129 16:16:39.322704 4895 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 16:16:39 crc kubenswrapper[4895]: I0129 16:16:39.469949 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 16:16:39 crc kubenswrapper[4895]: I0129 16:16:39.701528 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 16:16:39 crc kubenswrapper[4895]: I0129 16:16:39.713906 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 16:16:39 crc kubenswrapper[4895]: I0129 16:16:39.864233 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 16:16:40 crc kubenswrapper[4895]: I0129 16:16:40.059381 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 16:16:40 crc kubenswrapper[4895]: I0129 16:16:40.080536 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 16:16:40 crc kubenswrapper[4895]: I0129 16:16:40.142577 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 16:16:40 crc kubenswrapper[4895]: I0129 16:16:40.192390 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 16:16:40 crc kubenswrapper[4895]: I0129 16:16:40.199357 4895 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 16:16:40 crc kubenswrapper[4895]: I0129 16:16:40.339776 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 16:16:40 crc kubenswrapper[4895]: I0129 16:16:40.486932 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 16:16:40 crc kubenswrapper[4895]: I0129 16:16:40.518504 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 16:16:40 crc kubenswrapper[4895]: I0129 16:16:40.631981 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 16:16:40 crc kubenswrapper[4895]: I0129 16:16:40.646541 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 16:16:40 crc kubenswrapper[4895]: I0129 16:16:40.657860 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 16:16:40 crc kubenswrapper[4895]: I0129 16:16:40.803493 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 16:16:41 crc kubenswrapper[4895]: I0129 16:16:41.471744 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 16:16:41 crc kubenswrapper[4895]: I0129 16:16:41.493991 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 16:16:42 crc kubenswrapper[4895]: I0129 16:16:42.023634 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 16:16:42 crc kubenswrapper[4895]: 
I0129 16:16:42.430174 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.321264 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.321964 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.426788 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.426859 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.426908 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.426947 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.426945 4895 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.426980 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.426974 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.427002 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.427112 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.427806 4895 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.427825 4895 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.427837 4895 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.427848 4895 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.437152 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.529194 4895 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.709589 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.709651 4895 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="83aca0c1bba203ebdfed77e29e3525e4eeef57e76b9ddc90f66e894042464cad" exitCode=137 Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.709708 4895 scope.go:117] "RemoveContainer" containerID="83aca0c1bba203ebdfed77e29e3525e4eeef57e76b9ddc90f66e894042464cad" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.709776 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.744017 4895 scope.go:117] "RemoveContainer" containerID="83aca0c1bba203ebdfed77e29e3525e4eeef57e76b9ddc90f66e894042464cad" Jan 29 16:16:43 crc kubenswrapper[4895]: E0129 16:16:43.744664 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83aca0c1bba203ebdfed77e29e3525e4eeef57e76b9ddc90f66e894042464cad\": container with ID starting with 83aca0c1bba203ebdfed77e29e3525e4eeef57e76b9ddc90f66e894042464cad not found: ID does not exist" containerID="83aca0c1bba203ebdfed77e29e3525e4eeef57e76b9ddc90f66e894042464cad" Jan 29 16:16:43 crc kubenswrapper[4895]: I0129 16:16:43.744727 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83aca0c1bba203ebdfed77e29e3525e4eeef57e76b9ddc90f66e894042464cad"} err="failed to get container status \"83aca0c1bba203ebdfed77e29e3525e4eeef57e76b9ddc90f66e894042464cad\": rpc error: code = NotFound desc = could not find container \"83aca0c1bba203ebdfed77e29e3525e4eeef57e76b9ddc90f66e894042464cad\": container with ID starting with 83aca0c1bba203ebdfed77e29e3525e4eeef57e76b9ddc90f66e894042464cad not found: ID does not exist" Jan 29 16:16:45 crc kubenswrapper[4895]: I0129 16:16:45.043891 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 16:16:56 crc kubenswrapper[4895]: I0129 16:16:56.811556 4895 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 16:16:59 crc kubenswrapper[4895]: I0129 16:16:59.825104 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 16:16:59 crc kubenswrapper[4895]: I0129 16:16:59.827353 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 16:16:59 crc kubenswrapper[4895]: I0129 16:16:59.827420 4895 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="766efcacc3dcae6b6f922b8cda086bfb166fbca341394fe9e51d7859efbbbd6d" exitCode=137 Jan 29 16:16:59 crc kubenswrapper[4895]: I0129 16:16:59.827474 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"766efcacc3dcae6b6f922b8cda086bfb166fbca341394fe9e51d7859efbbbd6d"} Jan 29 16:16:59 crc kubenswrapper[4895]: I0129 16:16:59.827533 4895 scope.go:117] "RemoveContainer" containerID="6685e8fa2b1a4741fc473bf63bf454807687dec4f73c93b1fca25d6915776f4e" Jan 29 16:17:00 crc kubenswrapper[4895]: I0129 16:17:00.837639 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 16:17:00 crc kubenswrapper[4895]: I0129 16:17:00.843228 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8c4a732ac930d605fff4441101d13f2f38218b25ddd623ca00d0a82014b29236"} Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.297807 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q9l2w"] Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298083 4895 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" containerName="extract-utilities" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298097 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" containerName="extract-utilities" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298111 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" containerName="extract-content" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298118 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" containerName="extract-content" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298124 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298135 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298143 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" containerName="extract-content" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298149 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" containerName="extract-content" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298160 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298166 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298175 4895 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298182 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298192 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" containerName="extract-utilities" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298197 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" containerName="extract-utilities" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298206 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" containerName="extract-content" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298212 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" containerName="extract-content" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298221 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" containerName="extract-utilities" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298228 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" containerName="extract-utilities" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298237 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="249177f9-7b1e-4d39-a400-e625862f53c3" containerName="marketplace-operator" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298243 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="249177f9-7b1e-4d39-a400-e625862f53c3" containerName="marketplace-operator" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298252 
4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c405329-c382-44a2-8c9d-74976164f122" containerName="extract-content" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298259 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c405329-c382-44a2-8c9d-74976164f122" containerName="extract-content" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298267 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" containerName="extract-content" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298273 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" containerName="extract-content" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298281 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298287 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298295 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298301 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298310 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c405329-c382-44a2-8c9d-74976164f122" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298316 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c405329-c382-44a2-8c9d-74976164f122" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298326 4895 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" containerName="installer" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298331 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" containerName="installer" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298340 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" containerName="extract-content" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298346 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" containerName="extract-content" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298358 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" containerName="extract-utilities" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298364 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" containerName="extract-utilities" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298373 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c405329-c382-44a2-8c9d-74976164f122" containerName="extract-utilities" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298379 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c405329-c382-44a2-8c9d-74976164f122" containerName="extract-utilities" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298386 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298394 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 16:17:01 crc kubenswrapper[4895]: E0129 16:17:01.298401 4895 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" containerName="extract-utilities" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298407 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" containerName="extract-utilities" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298499 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf371b3-ba58-4be0-a05c-b88d01ffc60d" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298508 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="408c9cd8-1d91-4a4b-9e57-748578b4704e" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298517 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="06277e29-59af-446a-81a3-b3b8b1b5ab0a" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298524 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cb8033-bd36-4b53-8f71-7b2d8d527270" containerName="installer" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298535 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="044df5fd-0d96-4aab-b09e-24870d0e4bd9" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298554 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c405329-c382-44a2-8c9d-74976164f122" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298560 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ead4e3-bcde-4c47-8173-9f7773f0a45f" containerName="registry-server" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298569 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.298577 4895 
memory_manager.go:354] "RemoveStaleState removing state" podUID="249177f9-7b1e-4d39-a400-e625862f53c3" containerName="marketplace-operator" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.299367 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.302950 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.303510 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.303736 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.325397 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9l2w"] Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.463283 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p54q7"] Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.464911 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.468182 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.475594 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p54q7"] Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.484234 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2765471a-1d69-49cb-8d07-753b572fe408-utilities\") pod \"certified-operators-p54q7\" (UID: \"2765471a-1d69-49cb-8d07-753b572fe408\") " pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.484285 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz744\" (UniqueName: \"kubernetes.io/projected/a3b60df4-65e6-407a-b3ed-997271ae68b7-kube-api-access-sz744\") pod \"redhat-marketplace-q9l2w\" (UID: \"a3b60df4-65e6-407a-b3ed-997271ae68b7\") " pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.484440 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b60df4-65e6-407a-b3ed-997271ae68b7-catalog-content\") pod \"redhat-marketplace-q9l2w\" (UID: \"a3b60df4-65e6-407a-b3ed-997271ae68b7\") " pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.484509 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2765471a-1d69-49cb-8d07-753b572fe408-catalog-content\") pod \"certified-operators-p54q7\" 
(UID: \"2765471a-1d69-49cb-8d07-753b572fe408\") " pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.484627 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b60df4-65e6-407a-b3ed-997271ae68b7-utilities\") pod \"redhat-marketplace-q9l2w\" (UID: \"a3b60df4-65e6-407a-b3ed-997271ae68b7\") " pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.484720 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2g8j\" (UniqueName: \"kubernetes.io/projected/2765471a-1d69-49cb-8d07-753b572fe408-kube-api-access-t2g8j\") pod \"certified-operators-p54q7\" (UID: \"2765471a-1d69-49cb-8d07-753b572fe408\") " pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.586569 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2765471a-1d69-49cb-8d07-753b572fe408-catalog-content\") pod \"certified-operators-p54q7\" (UID: \"2765471a-1d69-49cb-8d07-753b572fe408\") " pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.586706 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b60df4-65e6-407a-b3ed-997271ae68b7-utilities\") pod \"redhat-marketplace-q9l2w\" (UID: \"a3b60df4-65e6-407a-b3ed-997271ae68b7\") " pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.586753 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2g8j\" (UniqueName: \"kubernetes.io/projected/2765471a-1d69-49cb-8d07-753b572fe408-kube-api-access-t2g8j\") pod 
\"certified-operators-p54q7\" (UID: \"2765471a-1d69-49cb-8d07-753b572fe408\") " pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.586801 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2765471a-1d69-49cb-8d07-753b572fe408-utilities\") pod \"certified-operators-p54q7\" (UID: \"2765471a-1d69-49cb-8d07-753b572fe408\") " pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.586846 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz744\" (UniqueName: \"kubernetes.io/projected/a3b60df4-65e6-407a-b3ed-997271ae68b7-kube-api-access-sz744\") pod \"redhat-marketplace-q9l2w\" (UID: \"a3b60df4-65e6-407a-b3ed-997271ae68b7\") " pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.586940 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b60df4-65e6-407a-b3ed-997271ae68b7-catalog-content\") pod \"redhat-marketplace-q9l2w\" (UID: \"a3b60df4-65e6-407a-b3ed-997271ae68b7\") " pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.587228 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2765471a-1d69-49cb-8d07-753b572fe408-catalog-content\") pod \"certified-operators-p54q7\" (UID: \"2765471a-1d69-49cb-8d07-753b572fe408\") " pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.587438 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2765471a-1d69-49cb-8d07-753b572fe408-utilities\") pod \"certified-operators-p54q7\" (UID: 
\"2765471a-1d69-49cb-8d07-753b572fe408\") " pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.587716 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b60df4-65e6-407a-b3ed-997271ae68b7-utilities\") pod \"redhat-marketplace-q9l2w\" (UID: \"a3b60df4-65e6-407a-b3ed-997271ae68b7\") " pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.588025 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b60df4-65e6-407a-b3ed-997271ae68b7-catalog-content\") pod \"redhat-marketplace-q9l2w\" (UID: \"a3b60df4-65e6-407a-b3ed-997271ae68b7\") " pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.611332 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2g8j\" (UniqueName: \"kubernetes.io/projected/2765471a-1d69-49cb-8d07-753b572fe408-kube-api-access-t2g8j\") pod \"certified-operators-p54q7\" (UID: \"2765471a-1d69-49cb-8d07-753b572fe408\") " pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.614646 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz744\" (UniqueName: \"kubernetes.io/projected/a3b60df4-65e6-407a-b3ed-997271ae68b7-kube-api-access-sz744\") pod \"redhat-marketplace-q9l2w\" (UID: \"a3b60df4-65e6-407a-b3ed-997271ae68b7\") " pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.620762 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.786039 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.908727 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q9l2w"] Jan 29 16:17:01 crc kubenswrapper[4895]: W0129 16:17:01.909734 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3b60df4_65e6_407a_b3ed_997271ae68b7.slice/crio-bee0a0a18114733fac2e05504af126933161cff9b2c132c6e905ef29b8cbf0a5 WatchSource:0}: Error finding container bee0a0a18114733fac2e05504af126933161cff9b2c132c6e905ef29b8cbf0a5: Status 404 returned error can't find the container with id bee0a0a18114733fac2e05504af126933161cff9b2c132c6e905ef29b8cbf0a5 Jan 29 16:17:01 crc kubenswrapper[4895]: I0129 16:17:01.999058 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p54q7"] Jan 29 16:17:02 crc kubenswrapper[4895]: I0129 16:17:02.252313 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:17:02 crc kubenswrapper[4895]: I0129 16:17:02.859625 4895 generic.go:334] "Generic (PLEG): container finished" podID="2765471a-1d69-49cb-8d07-753b572fe408" containerID="ed38c97c55af3e41c9db156d94588e3e645a5a9dcfe74617c681c28eab1d0a35" exitCode=0 Jan 29 16:17:02 crc kubenswrapper[4895]: I0129 16:17:02.859710 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p54q7" event={"ID":"2765471a-1d69-49cb-8d07-753b572fe408","Type":"ContainerDied","Data":"ed38c97c55af3e41c9db156d94588e3e645a5a9dcfe74617c681c28eab1d0a35"} Jan 29 16:17:02 crc kubenswrapper[4895]: I0129 16:17:02.859746 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p54q7" 
event={"ID":"2765471a-1d69-49cb-8d07-753b572fe408","Type":"ContainerStarted","Data":"c16af28ca01885894d342d5522c01afdacc9ecc211367fdf7571ce5798122e19"} Jan 29 16:17:02 crc kubenswrapper[4895]: I0129 16:17:02.862030 4895 generic.go:334] "Generic (PLEG): container finished" podID="a3b60df4-65e6-407a-b3ed-997271ae68b7" containerID="ffb285e5e84b0425f2292ec61dbac62692a270872e5f361ea7571d2051a10bf5" exitCode=0 Jan 29 16:17:02 crc kubenswrapper[4895]: I0129 16:17:02.862064 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9l2w" event={"ID":"a3b60df4-65e6-407a-b3ed-997271ae68b7","Type":"ContainerDied","Data":"ffb285e5e84b0425f2292ec61dbac62692a270872e5f361ea7571d2051a10bf5"} Jan 29 16:17:02 crc kubenswrapper[4895]: I0129 16:17:02.862095 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9l2w" event={"ID":"a3b60df4-65e6-407a-b3ed-997271ae68b7","Type":"ContainerStarted","Data":"bee0a0a18114733fac2e05504af126933161cff9b2c132c6e905ef29b8cbf0a5"} Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.660092 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q42rh"] Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.661391 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.668949 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.675668 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q42rh"] Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.732911 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz566\" (UniqueName: \"kubernetes.io/projected/e3654e61-241e-4ea7-9b75-7f135d437ed5-kube-api-access-tz566\") pod \"redhat-operators-q42rh\" (UID: \"e3654e61-241e-4ea7-9b75-7f135d437ed5\") " pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.732959 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3654e61-241e-4ea7-9b75-7f135d437ed5-catalog-content\") pod \"redhat-operators-q42rh\" (UID: \"e3654e61-241e-4ea7-9b75-7f135d437ed5\") " pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.732988 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3654e61-241e-4ea7-9b75-7f135d437ed5-utilities\") pod \"redhat-operators-q42rh\" (UID: \"e3654e61-241e-4ea7-9b75-7f135d437ed5\") " pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.834290 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz566\" (UniqueName: \"kubernetes.io/projected/e3654e61-241e-4ea7-9b75-7f135d437ed5-kube-api-access-tz566\") pod \"redhat-operators-q42rh\" (UID: 
\"e3654e61-241e-4ea7-9b75-7f135d437ed5\") " pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.834361 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3654e61-241e-4ea7-9b75-7f135d437ed5-catalog-content\") pod \"redhat-operators-q42rh\" (UID: \"e3654e61-241e-4ea7-9b75-7f135d437ed5\") " pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.834400 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3654e61-241e-4ea7-9b75-7f135d437ed5-utilities\") pod \"redhat-operators-q42rh\" (UID: \"e3654e61-241e-4ea7-9b75-7f135d437ed5\") " pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.835186 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3654e61-241e-4ea7-9b75-7f135d437ed5-utilities\") pod \"redhat-operators-q42rh\" (UID: \"e3654e61-241e-4ea7-9b75-7f135d437ed5\") " pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.835884 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3654e61-241e-4ea7-9b75-7f135d437ed5-catalog-content\") pod \"redhat-operators-q42rh\" (UID: \"e3654e61-241e-4ea7-9b75-7f135d437ed5\") " pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.861657 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fct4h"] Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.864490 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.867035 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.910433 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz566\" (UniqueName: \"kubernetes.io/projected/e3654e61-241e-4ea7-9b75-7f135d437ed5-kube-api-access-tz566\") pod \"redhat-operators-q42rh\" (UID: \"e3654e61-241e-4ea7-9b75-7f135d437ed5\") " pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.912240 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p54q7" event={"ID":"2765471a-1d69-49cb-8d07-753b572fe408","Type":"ContainerStarted","Data":"bb6e9bebb0d1e0a34513d430994830e448c61c75b8f25c26aa859e18e4ecc937"} Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.916947 4895 generic.go:334] "Generic (PLEG): container finished" podID="a3b60df4-65e6-407a-b3ed-997271ae68b7" containerID="1462c7e20d0f5f4e6aacecba5d7825b4b14cf27816711fb7a2fb8eaae2256a1e" exitCode=0 Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.917017 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9l2w" event={"ID":"a3b60df4-65e6-407a-b3ed-997271ae68b7","Type":"ContainerDied","Data":"1462c7e20d0f5f4e6aacecba5d7825b4b14cf27816711fb7a2fb8eaae2256a1e"} Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.917478 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fct4h"] Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.935668 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7ec8e528-cb81-403a-91e6-4dda3ece0f4e-utilities\") pod \"community-operators-fct4h\" (UID: \"7ec8e528-cb81-403a-91e6-4dda3ece0f4e\") " pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.935757 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ec8e528-cb81-403a-91e6-4dda3ece0f4e-catalog-content\") pod \"community-operators-fct4h\" (UID: \"7ec8e528-cb81-403a-91e6-4dda3ece0f4e\") " pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.935806 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xsqx\" (UniqueName: \"kubernetes.io/projected/7ec8e528-cb81-403a-91e6-4dda3ece0f4e-kube-api-access-7xsqx\") pod \"community-operators-fct4h\" (UID: \"7ec8e528-cb81-403a-91e6-4dda3ece0f4e\") " pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:03 crc kubenswrapper[4895]: I0129 16:17:03.982043 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.037573 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ec8e528-cb81-403a-91e6-4dda3ece0f4e-catalog-content\") pod \"community-operators-fct4h\" (UID: \"7ec8e528-cb81-403a-91e6-4dda3ece0f4e\") " pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.037681 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xsqx\" (UniqueName: \"kubernetes.io/projected/7ec8e528-cb81-403a-91e6-4dda3ece0f4e-kube-api-access-7xsqx\") pod \"community-operators-fct4h\" (UID: \"7ec8e528-cb81-403a-91e6-4dda3ece0f4e\") " pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.037902 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ec8e528-cb81-403a-91e6-4dda3ece0f4e-utilities\") pod \"community-operators-fct4h\" (UID: \"7ec8e528-cb81-403a-91e6-4dda3ece0f4e\") " pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.038725 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ec8e528-cb81-403a-91e6-4dda3ece0f4e-utilities\") pod \"community-operators-fct4h\" (UID: \"7ec8e528-cb81-403a-91e6-4dda3ece0f4e\") " pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.039157 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ec8e528-cb81-403a-91e6-4dda3ece0f4e-catalog-content\") pod \"community-operators-fct4h\" (UID: \"7ec8e528-cb81-403a-91e6-4dda3ece0f4e\") " 
pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.065989 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xsqx\" (UniqueName: \"kubernetes.io/projected/7ec8e528-cb81-403a-91e6-4dda3ece0f4e-kube-api-access-7xsqx\") pod \"community-operators-fct4h\" (UID: \"7ec8e528-cb81-403a-91e6-4dda3ece0f4e\") " pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.197830 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q42rh"] Jan 29 16:17:04 crc kubenswrapper[4895]: W0129 16:17:04.206015 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3654e61_241e_4ea7_9b75_7f135d437ed5.slice/crio-0ded58403479e2a580433c7a80be8e257438772d1ec613d26634a9c4264348dd WatchSource:0}: Error finding container 0ded58403479e2a580433c7a80be8e257438772d1ec613d26634a9c4264348dd: Status 404 returned error can't find the container with id 0ded58403479e2a580433c7a80be8e257438772d1ec613d26634a9c4264348dd Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.288684 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.531858 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fct4h"] Jan 29 16:17:04 crc kubenswrapper[4895]: W0129 16:17:04.564010 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ec8e528_cb81_403a_91e6_4dda3ece0f4e.slice/crio-53d2b0947213da9c35414c629d8348e46f9e3257479cdc664b5f28773a00178b WatchSource:0}: Error finding container 53d2b0947213da9c35414c629d8348e46f9e3257479cdc664b5f28773a00178b: Status 404 returned error can't find the container with id 53d2b0947213da9c35414c629d8348e46f9e3257479cdc664b5f28773a00178b Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.928224 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q9l2w" event={"ID":"a3b60df4-65e6-407a-b3ed-997271ae68b7","Type":"ContainerStarted","Data":"7291303db47d4377b9a161a1470d937ea2f5e632ee27e5ae9f9c8223c4cc06c1"} Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.931657 4895 generic.go:334] "Generic (PLEG): container finished" podID="e3654e61-241e-4ea7-9b75-7f135d437ed5" containerID="abc33f9fca474c4a2e175a4024d53d14e299714e9a8898105c98856c71dcc2e3" exitCode=0 Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.931714 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q42rh" event={"ID":"e3654e61-241e-4ea7-9b75-7f135d437ed5","Type":"ContainerDied","Data":"abc33f9fca474c4a2e175a4024d53d14e299714e9a8898105c98856c71dcc2e3"} Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.931740 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q42rh" 
event={"ID":"e3654e61-241e-4ea7-9b75-7f135d437ed5","Type":"ContainerStarted","Data":"0ded58403479e2a580433c7a80be8e257438772d1ec613d26634a9c4264348dd"} Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.936556 4895 generic.go:334] "Generic (PLEG): container finished" podID="2765471a-1d69-49cb-8d07-753b572fe408" containerID="bb6e9bebb0d1e0a34513d430994830e448c61c75b8f25c26aa859e18e4ecc937" exitCode=0 Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.936813 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p54q7" event={"ID":"2765471a-1d69-49cb-8d07-753b572fe408","Type":"ContainerDied","Data":"bb6e9bebb0d1e0a34513d430994830e448c61c75b8f25c26aa859e18e4ecc937"} Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.941505 4895 generic.go:334] "Generic (PLEG): container finished" podID="7ec8e528-cb81-403a-91e6-4dda3ece0f4e" containerID="bb1c0a2727fb4972e9ecba169dc55248b5ab82700042fbca9bbf56cab4714e1e" exitCode=0 Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.941556 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fct4h" event={"ID":"7ec8e528-cb81-403a-91e6-4dda3ece0f4e","Type":"ContainerDied","Data":"bb1c0a2727fb4972e9ecba169dc55248b5ab82700042fbca9bbf56cab4714e1e"} Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.941592 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fct4h" event={"ID":"7ec8e528-cb81-403a-91e6-4dda3ece0f4e","Type":"ContainerStarted","Data":"53d2b0947213da9c35414c629d8348e46f9e3257479cdc664b5f28773a00178b"} Jan 29 16:17:04 crc kubenswrapper[4895]: I0129 16:17:04.995060 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 16:17:05 crc kubenswrapper[4895]: I0129 16:17:05.007340 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q9l2w" 
podStartSLOduration=2.2878571340000002 podStartE2EDuration="4.007315616s" podCreationTimestamp="2026-01-29 16:17:01 +0000 UTC" firstStartedPulling="2026-01-29 16:17:02.865229886 +0000 UTC m=+306.668207190" lastFinishedPulling="2026-01-29 16:17:04.584688408 +0000 UTC m=+308.387665672" observedRunningTime="2026-01-29 16:17:04.971596624 +0000 UTC m=+308.774573898" watchObservedRunningTime="2026-01-29 16:17:05.007315616 +0000 UTC m=+308.810292900" Jan 29 16:17:05 crc kubenswrapper[4895]: I0129 16:17:05.951120 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p54q7" event={"ID":"2765471a-1d69-49cb-8d07-753b572fe408","Type":"ContainerStarted","Data":"ab038d0035b4d5d49d7154012f41447cc6c59f56df8be79ff1dc9655b9ba2cc7"} Jan 29 16:17:05 crc kubenswrapper[4895]: I0129 16:17:05.983408 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p54q7" podStartSLOduration=2.496086036 podStartE2EDuration="4.983339559s" podCreationTimestamp="2026-01-29 16:17:01 +0000 UTC" firstStartedPulling="2026-01-29 16:17:02.861785238 +0000 UTC m=+306.664762532" lastFinishedPulling="2026-01-29 16:17:05.349038791 +0000 UTC m=+309.152016055" observedRunningTime="2026-01-29 16:17:05.977426851 +0000 UTC m=+309.780404145" watchObservedRunningTime="2026-01-29 16:17:05.983339559 +0000 UTC m=+309.786316893" Jan 29 16:17:06 crc kubenswrapper[4895]: I0129 16:17:06.534959 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 16:17:06 crc kubenswrapper[4895]: I0129 16:17:06.960026 4895 generic.go:334] "Generic (PLEG): container finished" podID="7ec8e528-cb81-403a-91e6-4dda3ece0f4e" containerID="29ebee8983999471f908591c49add1877983ccb37aa88028c109799f0a028a8d" exitCode=0 Jan 29 16:17:06 crc kubenswrapper[4895]: I0129 16:17:06.960109 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-fct4h" event={"ID":"7ec8e528-cb81-403a-91e6-4dda3ece0f4e","Type":"ContainerDied","Data":"29ebee8983999471f908591c49add1877983ccb37aa88028c109799f0a028a8d"} Jan 29 16:17:06 crc kubenswrapper[4895]: I0129 16:17:06.968507 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q42rh" event={"ID":"e3654e61-241e-4ea7-9b75-7f135d437ed5","Type":"ContainerStarted","Data":"3cd0044d27198cf7da78ba7c1e1e193992a3e9e50b57fb375f675016a5234cdf"} Jan 29 16:17:07 crc kubenswrapper[4895]: I0129 16:17:07.975779 4895 generic.go:334] "Generic (PLEG): container finished" podID="e3654e61-241e-4ea7-9b75-7f135d437ed5" containerID="3cd0044d27198cf7da78ba7c1e1e193992a3e9e50b57fb375f675016a5234cdf" exitCode=0 Jan 29 16:17:07 crc kubenswrapper[4895]: I0129 16:17:07.975860 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q42rh" event={"ID":"e3654e61-241e-4ea7-9b75-7f135d437ed5","Type":"ContainerDied","Data":"3cd0044d27198cf7da78ba7c1e1e193992a3e9e50b57fb375f675016a5234cdf"} Jan 29 16:17:07 crc kubenswrapper[4895]: I0129 16:17:07.979097 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fct4h" event={"ID":"7ec8e528-cb81-403a-91e6-4dda3ece0f4e","Type":"ContainerStarted","Data":"3bf9bdcbadea2e57610080f08bc7d552f0bc28fd2f99f189f2ab43d58198bbab"} Jan 29 16:17:08 crc kubenswrapper[4895]: I0129 16:17:08.022995 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fct4h" podStartSLOduration=2.566966738 podStartE2EDuration="5.022973056s" podCreationTimestamp="2026-01-29 16:17:03 +0000 UTC" firstStartedPulling="2026-01-29 16:17:04.943639951 +0000 UTC m=+308.746617215" lastFinishedPulling="2026-01-29 16:17:07.399646269 +0000 UTC m=+311.202623533" observedRunningTime="2026-01-29 16:17:08.022622426 +0000 UTC m=+311.825599720" 
watchObservedRunningTime="2026-01-29 16:17:08.022973056 +0000 UTC m=+311.825950320" Jan 29 16:17:08 crc kubenswrapper[4895]: I0129 16:17:08.987637 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q42rh" event={"ID":"e3654e61-241e-4ea7-9b75-7f135d437ed5","Type":"ContainerStarted","Data":"5b282d0f9894e4915ada126e08d23a3c265617ead1414d2987bc3b243de7cc3c"} Jan 29 16:17:09 crc kubenswrapper[4895]: I0129 16:17:09.015961 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q42rh" podStartSLOduration=2.542607768 podStartE2EDuration="6.015930648s" podCreationTimestamp="2026-01-29 16:17:03 +0000 UTC" firstStartedPulling="2026-01-29 16:17:04.935232003 +0000 UTC m=+308.738209277" lastFinishedPulling="2026-01-29 16:17:08.408554893 +0000 UTC m=+312.211532157" observedRunningTime="2026-01-29 16:17:09.011339837 +0000 UTC m=+312.814317131" watchObservedRunningTime="2026-01-29 16:17:09.015930648 +0000 UTC m=+312.818907932" Jan 29 16:17:09 crc kubenswrapper[4895]: I0129 16:17:09.521036 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:17:09 crc kubenswrapper[4895]: I0129 16:17:09.526557 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:17:10 crc kubenswrapper[4895]: I0129 16:17:10.002191 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:17:11 crc kubenswrapper[4895]: I0129 16:17:11.622118 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:11 crc kubenswrapper[4895]: I0129 16:17:11.622177 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:11 crc kubenswrapper[4895]: I0129 16:17:11.680476 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:11 crc kubenswrapper[4895]: I0129 16:17:11.786985 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:11 crc kubenswrapper[4895]: I0129 16:17:11.787086 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:11 crc kubenswrapper[4895]: I0129 16:17:11.827346 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:12 crc kubenswrapper[4895]: I0129 16:17:12.048146 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p54q7" Jan 29 16:17:12 crc kubenswrapper[4895]: I0129 16:17:12.082329 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q9l2w" Jan 29 16:17:13 crc kubenswrapper[4895]: I0129 16:17:13.982507 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:13 crc kubenswrapper[4895]: I0129 16:17:13.985495 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:14 crc kubenswrapper[4895]: I0129 16:17:14.289450 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:14 crc kubenswrapper[4895]: I0129 16:17:14.289560 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:14 crc kubenswrapper[4895]: I0129 
16:17:14.337119 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:15 crc kubenswrapper[4895]: I0129 16:17:15.026565 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q42rh" podUID="e3654e61-241e-4ea7-9b75-7f135d437ed5" containerName="registry-server" probeResult="failure" output=< Jan 29 16:17:15 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 16:17:15 crc kubenswrapper[4895]: > Jan 29 16:17:15 crc kubenswrapper[4895]: I0129 16:17:15.090060 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fct4h" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.052224 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.067669 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-scmbk"] Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.068521 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.070535 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.070713 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.076360 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.131388 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-scmbk"] Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.142780 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r8fhh"] Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.143083 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" podUID="53da942c-31af-4b5b-9e63-4e53147ad257" containerName="controller-manager" containerID="cri-o://d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491" gracePeriod=30 Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.149267 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp"] Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.149651 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" podUID="5e29d559-3a15-4a8f-9494-6c5d4cf4c642" containerName="route-controller-manager" 
containerID="cri-o://1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848" gracePeriod=30 Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.186078 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q42rh" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.235536 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c105ff6-00bb-4637-8e66-2f7899e80bdf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-scmbk\" (UID: \"6c105ff6-00bb-4637-8e66-2f7899e80bdf\") " pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.235659 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c105ff6-00bb-4637-8e66-2f7899e80bdf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-scmbk\" (UID: \"6c105ff6-00bb-4637-8e66-2f7899e80bdf\") " pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.235705 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pqb6\" (UniqueName: \"kubernetes.io/projected/6c105ff6-00bb-4637-8e66-2f7899e80bdf-kube-api-access-4pqb6\") pod \"marketplace-operator-79b997595-scmbk\" (UID: \"6c105ff6-00bb-4637-8e66-2f7899e80bdf\") " pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.337299 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c105ff6-00bb-4637-8e66-2f7899e80bdf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-scmbk\" (UID: 
\"6c105ff6-00bb-4637-8e66-2f7899e80bdf\") " pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.338058 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c105ff6-00bb-4637-8e66-2f7899e80bdf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-scmbk\" (UID: \"6c105ff6-00bb-4637-8e66-2f7899e80bdf\") " pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.338191 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pqb6\" (UniqueName: \"kubernetes.io/projected/6c105ff6-00bb-4637-8e66-2f7899e80bdf-kube-api-access-4pqb6\") pod \"marketplace-operator-79b997595-scmbk\" (UID: \"6c105ff6-00bb-4637-8e66-2f7899e80bdf\") " pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.339158 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c105ff6-00bb-4637-8e66-2f7899e80bdf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-scmbk\" (UID: \"6c105ff6-00bb-4637-8e66-2f7899e80bdf\") " pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.362207 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6c105ff6-00bb-4637-8e66-2f7899e80bdf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-scmbk\" (UID: \"6c105ff6-00bb-4637-8e66-2f7899e80bdf\") " pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.365666 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4pqb6\" (UniqueName: \"kubernetes.io/projected/6c105ff6-00bb-4637-8e66-2f7899e80bdf-kube-api-access-4pqb6\") pod \"marketplace-operator-79b997595-scmbk\" (UID: \"6c105ff6-00bb-4637-8e66-2f7899e80bdf\") " pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.427600 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.656845 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.656845 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.751192 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-client-ca\") pod \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.751238 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qfgc\" (UniqueName: \"kubernetes.io/projected/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-kube-api-access-4qfgc\") pod \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.751317 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz89h\" (UniqueName: \"kubernetes.io/projected/53da942c-31af-4b5b-9e63-4e53147ad257-kube-api-access-jz89h\") pod \"53da942c-31af-4b5b-9e63-4e53147ad257\" (UID: 
\"53da942c-31af-4b5b-9e63-4e53147ad257\") " Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.751382 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-client-ca\") pod \"53da942c-31af-4b5b-9e63-4e53147ad257\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.751416 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-config\") pod \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.751448 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-proxy-ca-bundles\") pod \"53da942c-31af-4b5b-9e63-4e53147ad257\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.751474 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53da942c-31af-4b5b-9e63-4e53147ad257-serving-cert\") pod \"53da942c-31af-4b5b-9e63-4e53147ad257\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.751495 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-config\") pod \"53da942c-31af-4b5b-9e63-4e53147ad257\" (UID: \"53da942c-31af-4b5b-9e63-4e53147ad257\") " Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.751535 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-serving-cert\") pod \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\" (UID: \"5e29d559-3a15-4a8f-9494-6c5d4cf4c642\") " Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.752692 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-client-ca" (OuterVolumeSpecName: "client-ca") pod "5e29d559-3a15-4a8f-9494-6c5d4cf4c642" (UID: "5e29d559-3a15-4a8f-9494-6c5d4cf4c642"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.753106 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-config" (OuterVolumeSpecName: "config") pod "5e29d559-3a15-4a8f-9494-6c5d4cf4c642" (UID: "5e29d559-3a15-4a8f-9494-6c5d4cf4c642"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.753309 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "53da942c-31af-4b5b-9e63-4e53147ad257" (UID: "53da942c-31af-4b5b-9e63-4e53147ad257"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.753802 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-client-ca" (OuterVolumeSpecName: "client-ca") pod "53da942c-31af-4b5b-9e63-4e53147ad257" (UID: "53da942c-31af-4b5b-9e63-4e53147ad257"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.754186 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-config" (OuterVolumeSpecName: "config") pod "53da942c-31af-4b5b-9e63-4e53147ad257" (UID: "53da942c-31af-4b5b-9e63-4e53147ad257"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.758718 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53da942c-31af-4b5b-9e63-4e53147ad257-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "53da942c-31af-4b5b-9e63-4e53147ad257" (UID: "53da942c-31af-4b5b-9e63-4e53147ad257"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.758714 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53da942c-31af-4b5b-9e63-4e53147ad257-kube-api-access-jz89h" (OuterVolumeSpecName: "kube-api-access-jz89h") pod "53da942c-31af-4b5b-9e63-4e53147ad257" (UID: "53da942c-31af-4b5b-9e63-4e53147ad257"). InnerVolumeSpecName "kube-api-access-jz89h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.758826 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5e29d559-3a15-4a8f-9494-6c5d4cf4c642" (UID: "5e29d559-3a15-4a8f-9494-6c5d4cf4c642"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.764414 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-kube-api-access-4qfgc" (OuterVolumeSpecName: "kube-api-access-4qfgc") pod "5e29d559-3a15-4a8f-9494-6c5d4cf4c642" (UID: "5e29d559-3a15-4a8f-9494-6c5d4cf4c642"). InnerVolumeSpecName "kube-api-access-4qfgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.852470 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.852532 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qfgc\" (UniqueName: \"kubernetes.io/projected/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-kube-api-access-4qfgc\") on node \"crc\" DevicePath \"\"" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.852546 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz89h\" (UniqueName: \"kubernetes.io/projected/53da942c-31af-4b5b-9e63-4e53147ad257-kube-api-access-jz89h\") on node \"crc\" DevicePath \"\"" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.852556 4895 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.852567 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.852576 4895 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.852587 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53da942c-31af-4b5b-9e63-4e53147ad257-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.852595 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53da942c-31af-4b5b-9e63-4e53147ad257-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.852603 4895 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e29d559-3a15-4a8f-9494-6c5d4cf4c642-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:17:24 crc kubenswrapper[4895]: I0129 16:17:24.974977 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-scmbk"] Jan 29 16:17:24 crc kubenswrapper[4895]: W0129 16:17:24.980251 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c105ff6_00bb_4637_8e66_2f7899e80bdf.slice/crio-d0f8ad328e05b0b238958c382e6b8f6de45ab1605bf41a56a6b7fb2d55e20293 WatchSource:0}: Error finding container d0f8ad328e05b0b238958c382e6b8f6de45ab1605bf41a56a6b7fb2d55e20293: Status 404 returned error can't find the container with id d0f8ad328e05b0b238958c382e6b8f6de45ab1605bf41a56a6b7fb2d55e20293 Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.125307 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" event={"ID":"6c105ff6-00bb-4637-8e66-2f7899e80bdf","Type":"ContainerStarted","Data":"bf6b0ce2b03e03a988b0d005c896c4bf7869d301ebb63019feb5b981654ba90e"} Jan 29 16:17:25 crc 
kubenswrapper[4895]: I0129 16:17:25.125370 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" event={"ID":"6c105ff6-00bb-4637-8e66-2f7899e80bdf","Type":"ContainerStarted","Data":"d0f8ad328e05b0b238958c382e6b8f6de45ab1605bf41a56a6b7fb2d55e20293"} Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.125605 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.127678 4895 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-scmbk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" start-of-body= Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.127724 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" podUID="6c105ff6-00bb-4637-8e66-2f7899e80bdf" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.128155 4895 generic.go:334] "Generic (PLEG): container finished" podID="5e29d559-3a15-4a8f-9494-6c5d4cf4c642" containerID="1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848" exitCode=0 Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.128207 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.128305 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" event={"ID":"5e29d559-3a15-4a8f-9494-6c5d4cf4c642","Type":"ContainerDied","Data":"1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848"} Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.128357 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp" event={"ID":"5e29d559-3a15-4a8f-9494-6c5d4cf4c642","Type":"ContainerDied","Data":"e6385a9f789d4f55010636af2ae666e0e00bb1234ae3fd362a059bcfd4984cc6"} Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.128382 4895 scope.go:117] "RemoveContainer" containerID="1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.130669 4895 generic.go:334] "Generic (PLEG): container finished" podID="53da942c-31af-4b5b-9e63-4e53147ad257" containerID="d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491" exitCode=0 Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.130769 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" event={"ID":"53da942c-31af-4b5b-9e63-4e53147ad257","Type":"ContainerDied","Data":"d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491"} Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.130809 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" event={"ID":"53da942c-31af-4b5b-9e63-4e53147ad257","Type":"ContainerDied","Data":"2f8412e439ff1e93a65449219a0d4b00122bb2cd151d75b763d7bccda725ab56"} Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.130891 4895 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r8fhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.150859 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" podStartSLOduration=1.150840194 podStartE2EDuration="1.150840194s" podCreationTimestamp="2026-01-29 16:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:17:25.146773179 +0000 UTC m=+328.949750443" watchObservedRunningTime="2026-01-29 16:17:25.150840194 +0000 UTC m=+328.953817458" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.152961 4895 scope.go:117] "RemoveContainer" containerID="1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848" Jan 29 16:17:25 crc kubenswrapper[4895]: E0129 16:17:25.153604 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848\": container with ID starting with 1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848 not found: ID does not exist" containerID="1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.153666 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848"} err="failed to get container status \"1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848\": rpc error: code = NotFound desc = could not find container \"1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848\": container with ID starting with 1cd6de05366e742f067a89336942ac0cc9cddf8701de1185ed5a4e78dfd03848 not found: ID does not exist" Jan 29 16:17:25 crc 
kubenswrapper[4895]: I0129 16:17:25.153708 4895 scope.go:117] "RemoveContainer" containerID="d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.169146 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r8fhh"] Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.175480 4895 scope.go:117] "RemoveContainer" containerID="d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491" Jan 29 16:17:25 crc kubenswrapper[4895]: E0129 16:17:25.176089 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491\": container with ID starting with d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491 not found: ID does not exist" containerID="d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.176153 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491"} err="failed to get container status \"d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491\": rpc error: code = NotFound desc = could not find container \"d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491\": container with ID starting with d670613031132712b865fb8cc5c29993610f7eb526690445ce8db24f8047a491 not found: ID does not exist" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.180367 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r8fhh"] Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.185081 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp"] Jan 29 16:17:25 crc 
kubenswrapper[4895]: I0129 16:17:25.192471 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z4xsp"] Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.628501 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bc744f758-zhhhh"] Jan 29 16:17:25 crc kubenswrapper[4895]: E0129 16:17:25.628839 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53da942c-31af-4b5b-9e63-4e53147ad257" containerName="controller-manager" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.628855 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="53da942c-31af-4b5b-9e63-4e53147ad257" containerName="controller-manager" Jan 29 16:17:25 crc kubenswrapper[4895]: E0129 16:17:25.628896 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e29d559-3a15-4a8f-9494-6c5d4cf4c642" containerName="route-controller-manager" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.628903 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e29d559-3a15-4a8f-9494-6c5d4cf4c642" containerName="route-controller-manager" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.629033 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e29d559-3a15-4a8f-9494-6c5d4cf4c642" containerName="route-controller-manager" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.629051 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="53da942c-31af-4b5b-9e63-4e53147ad257" containerName="controller-manager" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.629571 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.631659 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv"] Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.632239 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.632249 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.632427 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.632466 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.632589 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.633028 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.634027 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.634332 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.635279 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 
29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.635431 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.635554 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.635661 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.637102 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.642607 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.661600 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bc744f758-zhhhh"] Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.664951 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a018e6-48a4-462d-b96a-d377910faa01-serving-cert\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.664994 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a018e6-48a4-462d-b96a-d377910faa01-config\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " 
pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.665054 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a018e6-48a4-462d-b96a-d377910faa01-proxy-ca-bundles\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.665095 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/559f3f17-3e73-45fc-b90b-cf5e5ac420f6-serving-cert\") pod \"route-controller-manager-8487d79fd6-87vlv\" (UID: \"559f3f17-3e73-45fc-b90b-cf5e5ac420f6\") " pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.665126 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjb9c\" (UniqueName: \"kubernetes.io/projected/39a018e6-48a4-462d-b96a-d377910faa01-kube-api-access-fjb9c\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.665154 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/559f3f17-3e73-45fc-b90b-cf5e5ac420f6-client-ca\") pod \"route-controller-manager-8487d79fd6-87vlv\" (UID: \"559f3f17-3e73-45fc-b90b-cf5e5ac420f6\") " pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.665192 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cwqm\" (UniqueName: \"kubernetes.io/projected/559f3f17-3e73-45fc-b90b-cf5e5ac420f6-kube-api-access-9cwqm\") pod \"route-controller-manager-8487d79fd6-87vlv\" (UID: \"559f3f17-3e73-45fc-b90b-cf5e5ac420f6\") " pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.665243 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39a018e6-48a4-462d-b96a-d377910faa01-client-ca\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.665308 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/559f3f17-3e73-45fc-b90b-cf5e5ac420f6-config\") pod \"route-controller-manager-8487d79fd6-87vlv\" (UID: \"559f3f17-3e73-45fc-b90b-cf5e5ac420f6\") " pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.666591 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv"] Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.766605 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a018e6-48a4-462d-b96a-d377910faa01-serving-cert\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.766666 4895 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a018e6-48a4-462d-b96a-d377910faa01-config\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.766704 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a018e6-48a4-462d-b96a-d377910faa01-proxy-ca-bundles\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.766734 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/559f3f17-3e73-45fc-b90b-cf5e5ac420f6-serving-cert\") pod \"route-controller-manager-8487d79fd6-87vlv\" (UID: \"559f3f17-3e73-45fc-b90b-cf5e5ac420f6\") " pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.766757 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjb9c\" (UniqueName: \"kubernetes.io/projected/39a018e6-48a4-462d-b96a-d377910faa01-kube-api-access-fjb9c\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.766780 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/559f3f17-3e73-45fc-b90b-cf5e5ac420f6-client-ca\") pod \"route-controller-manager-8487d79fd6-87vlv\" (UID: \"559f3f17-3e73-45fc-b90b-cf5e5ac420f6\") " 
pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.766810 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cwqm\" (UniqueName: \"kubernetes.io/projected/559f3f17-3e73-45fc-b90b-cf5e5ac420f6-kube-api-access-9cwqm\") pod \"route-controller-manager-8487d79fd6-87vlv\" (UID: \"559f3f17-3e73-45fc-b90b-cf5e5ac420f6\") " pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.766836 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39a018e6-48a4-462d-b96a-d377910faa01-client-ca\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.766854 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/559f3f17-3e73-45fc-b90b-cf5e5ac420f6-config\") pod \"route-controller-manager-8487d79fd6-87vlv\" (UID: \"559f3f17-3e73-45fc-b90b-cf5e5ac420f6\") " pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.768395 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/559f3f17-3e73-45fc-b90b-cf5e5ac420f6-client-ca\") pod \"route-controller-manager-8487d79fd6-87vlv\" (UID: \"559f3f17-3e73-45fc-b90b-cf5e5ac420f6\") " pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.768469 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/39a018e6-48a4-462d-b96a-d377910faa01-client-ca\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.768556 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/559f3f17-3e73-45fc-b90b-cf5e5ac420f6-config\") pod \"route-controller-manager-8487d79fd6-87vlv\" (UID: \"559f3f17-3e73-45fc-b90b-cf5e5ac420f6\") " pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.768551 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a018e6-48a4-462d-b96a-d377910faa01-proxy-ca-bundles\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.768986 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a018e6-48a4-462d-b96a-d377910faa01-config\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.776251 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a018e6-48a4-462d-b96a-d377910faa01-serving-cert\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.777024 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/559f3f17-3e73-45fc-b90b-cf5e5ac420f6-serving-cert\") pod \"route-controller-manager-8487d79fd6-87vlv\" (UID: \"559f3f17-3e73-45fc-b90b-cf5e5ac420f6\") " pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.787719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cwqm\" (UniqueName: \"kubernetes.io/projected/559f3f17-3e73-45fc-b90b-cf5e5ac420f6-kube-api-access-9cwqm\") pod \"route-controller-manager-8487d79fd6-87vlv\" (UID: \"559f3f17-3e73-45fc-b90b-cf5e5ac420f6\") " pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.801249 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjb9c\" (UniqueName: \"kubernetes.io/projected/39a018e6-48a4-462d-b96a-d377910faa01-kube-api-access-fjb9c\") pod \"controller-manager-5bc744f758-zhhhh\" (UID: \"39a018e6-48a4-462d-b96a-d377910faa01\") " pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.952028 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:25 crc kubenswrapper[4895]: I0129 16:17:25.960395 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:26 crc kubenswrapper[4895]: I0129 16:17:26.147716 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-scmbk" Jan 29 16:17:26 crc kubenswrapper[4895]: W0129 16:17:26.186503 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39a018e6_48a4_462d_b96a_d377910faa01.slice/crio-7b2e8369424fa78330a3bdc202f0d1c63fc18ca08d2c305a6786cc55d35c7334 WatchSource:0}: Error finding container 7b2e8369424fa78330a3bdc202f0d1c63fc18ca08d2c305a6786cc55d35c7334: Status 404 returned error can't find the container with id 7b2e8369424fa78330a3bdc202f0d1c63fc18ca08d2c305a6786cc55d35c7334 Jan 29 16:17:26 crc kubenswrapper[4895]: I0129 16:17:26.188116 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bc744f758-zhhhh"] Jan 29 16:17:26 crc kubenswrapper[4895]: I0129 16:17:26.229600 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv"] Jan 29 16:17:26 crc kubenswrapper[4895]: W0129 16:17:26.233397 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559f3f17_3e73_45fc_b90b_cf5e5ac420f6.slice/crio-a407a6eb428f75d269576c5131e7616d47b0d87dad34fe682c1fcbe712d092f8 WatchSource:0}: Error finding container a407a6eb428f75d269576c5131e7616d47b0d87dad34fe682c1fcbe712d092f8: Status 404 returned error can't find the container with id a407a6eb428f75d269576c5131e7616d47b0d87dad34fe682c1fcbe712d092f8 Jan 29 16:17:27 crc kubenswrapper[4895]: I0129 16:17:27.046293 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53da942c-31af-4b5b-9e63-4e53147ad257" 
path="/var/lib/kubelet/pods/53da942c-31af-4b5b-9e63-4e53147ad257/volumes" Jan 29 16:17:27 crc kubenswrapper[4895]: I0129 16:17:27.047360 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e29d559-3a15-4a8f-9494-6c5d4cf4c642" path="/var/lib/kubelet/pods/5e29d559-3a15-4a8f-9494-6c5d4cf4c642/volumes" Jan 29 16:17:27 crc kubenswrapper[4895]: I0129 16:17:27.149348 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" event={"ID":"559f3f17-3e73-45fc-b90b-cf5e5ac420f6","Type":"ContainerStarted","Data":"40ed73ae7525243a8870f5ee03a0529247c45754c970663fb5badd2463f7514e"} Jan 29 16:17:27 crc kubenswrapper[4895]: I0129 16:17:27.149425 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" event={"ID":"559f3f17-3e73-45fc-b90b-cf5e5ac420f6","Type":"ContainerStarted","Data":"a407a6eb428f75d269576c5131e7616d47b0d87dad34fe682c1fcbe712d092f8"} Jan 29 16:17:27 crc kubenswrapper[4895]: I0129 16:17:27.149510 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:27 crc kubenswrapper[4895]: I0129 16:17:27.152132 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" event={"ID":"39a018e6-48a4-462d-b96a-d377910faa01","Type":"ContainerStarted","Data":"cd6925013663b61b79159ca1ff23a75c93d31b809804debbb69d00e7d3e0cffd"} Jan 29 16:17:27 crc kubenswrapper[4895]: I0129 16:17:27.152202 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" event={"ID":"39a018e6-48a4-462d-b96a-d377910faa01","Type":"ContainerStarted","Data":"7b2e8369424fa78330a3bdc202f0d1c63fc18ca08d2c305a6786cc55d35c7334"} Jan 29 16:17:27 crc kubenswrapper[4895]: I0129 16:17:27.159143 4895 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" Jan 29 16:17:27 crc kubenswrapper[4895]: I0129 16:17:27.171082 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8487d79fd6-87vlv" podStartSLOduration=3.171058992 podStartE2EDuration="3.171058992s" podCreationTimestamp="2026-01-29 16:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:17:27.166329508 +0000 UTC m=+330.969306792" watchObservedRunningTime="2026-01-29 16:17:27.171058992 +0000 UTC m=+330.974036256" Jan 29 16:17:27 crc kubenswrapper[4895]: I0129 16:17:27.190414 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" podStartSLOduration=3.19038971 podStartE2EDuration="3.19038971s" podCreationTimestamp="2026-01-29 16:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:17:27.187029965 +0000 UTC m=+330.990007229" watchObservedRunningTime="2026-01-29 16:17:27.19038971 +0000 UTC m=+330.993366984" Jan 29 16:17:28 crc kubenswrapper[4895]: I0129 16:17:28.158373 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:28 crc kubenswrapper[4895]: I0129 16:17:28.163404 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bc744f758-zhhhh" Jan 29 16:17:57 crc kubenswrapper[4895]: I0129 16:17:57.823801 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:17:57 crc kubenswrapper[4895]: I0129 16:17:57.824496 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.735009 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xw2pn"] Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.736913 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.758333 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xw2pn"] Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.897670 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b5300ed8-f7fb-4169-afee-62c95445f5a1-registry-tls\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.897980 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b5300ed8-f7fb-4169-afee-62c95445f5a1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.898080 
4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b5300ed8-f7fb-4169-afee-62c95445f5a1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.898209 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.898302 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b5300ed8-f7fb-4169-afee-62c95445f5a1-registry-certificates\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.898384 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5300ed8-f7fb-4169-afee-62c95445f5a1-bound-sa-token\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.898467 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5300ed8-f7fb-4169-afee-62c95445f5a1-trusted-ca\") pod 
\"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.898537 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdgm4\" (UniqueName: \"kubernetes.io/projected/b5300ed8-f7fb-4169-afee-62c95445f5a1-kube-api-access-fdgm4\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.922531 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.999500 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5300ed8-f7fb-4169-afee-62c95445f5a1-bound-sa-token\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.999572 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5300ed8-f7fb-4169-afee-62c95445f5a1-trusted-ca\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.999602 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fdgm4\" (UniqueName: \"kubernetes.io/projected/b5300ed8-f7fb-4169-afee-62c95445f5a1-kube-api-access-fdgm4\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:11 crc kubenswrapper[4895]: I0129 16:18:11.999655 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b5300ed8-f7fb-4169-afee-62c95445f5a1-registry-tls\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:12 crc kubenswrapper[4895]: I0129 16:18:11.999755 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b5300ed8-f7fb-4169-afee-62c95445f5a1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:12 crc kubenswrapper[4895]: I0129 16:18:11.999798 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b5300ed8-f7fb-4169-afee-62c95445f5a1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:12 crc kubenswrapper[4895]: I0129 16:18:11.999844 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b5300ed8-f7fb-4169-afee-62c95445f5a1-registry-certificates\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:12 crc kubenswrapper[4895]: 
I0129 16:18:12.000980 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b5300ed8-f7fb-4169-afee-62c95445f5a1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:12 crc kubenswrapper[4895]: I0129 16:18:12.001937 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5300ed8-f7fb-4169-afee-62c95445f5a1-trusted-ca\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:12 crc kubenswrapper[4895]: I0129 16:18:12.001992 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b5300ed8-f7fb-4169-afee-62c95445f5a1-registry-certificates\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:12 crc kubenswrapper[4895]: I0129 16:18:12.014092 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b5300ed8-f7fb-4169-afee-62c95445f5a1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:12 crc kubenswrapper[4895]: I0129 16:18:12.014183 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b5300ed8-f7fb-4169-afee-62c95445f5a1-registry-tls\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" 
Jan 29 16:18:12 crc kubenswrapper[4895]: I0129 16:18:12.021194 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdgm4\" (UniqueName: \"kubernetes.io/projected/b5300ed8-f7fb-4169-afee-62c95445f5a1-kube-api-access-fdgm4\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:12 crc kubenswrapper[4895]: I0129 16:18:12.024768 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5300ed8-f7fb-4169-afee-62c95445f5a1-bound-sa-token\") pod \"image-registry-66df7c8f76-xw2pn\" (UID: \"b5300ed8-f7fb-4169-afee-62c95445f5a1\") " pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:12 crc kubenswrapper[4895]: I0129 16:18:12.059144 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:12 crc kubenswrapper[4895]: I0129 16:18:12.488146 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xw2pn"] Jan 29 16:18:13 crc kubenswrapper[4895]: I0129 16:18:13.474816 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" event={"ID":"b5300ed8-f7fb-4169-afee-62c95445f5a1","Type":"ContainerStarted","Data":"eb859e405c07adf54e84274f266325582ab7fff8d2b0e26052fafe9e138c9908"} Jan 29 16:18:13 crc kubenswrapper[4895]: I0129 16:18:13.474897 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" event={"ID":"b5300ed8-f7fb-4169-afee-62c95445f5a1","Type":"ContainerStarted","Data":"8c6940f9756e2140ee6f1b9f4bcc8c8b7afa715d5d58020484f283b27d48376e"} Jan 29 16:18:13 crc kubenswrapper[4895]: I0129 16:18:13.475013 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:13 crc kubenswrapper[4895]: I0129 16:18:13.498332 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" podStartSLOduration=2.498304415 podStartE2EDuration="2.498304415s" podCreationTimestamp="2026-01-29 16:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:18:13.49669351 +0000 UTC m=+377.299670784" watchObservedRunningTime="2026-01-29 16:18:13.498304415 +0000 UTC m=+377.301281679" Jan 29 16:18:27 crc kubenswrapper[4895]: I0129 16:18:27.823073 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:18:27 crc kubenswrapper[4895]: I0129 16:18:27.824008 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:18:32 crc kubenswrapper[4895]: I0129 16:18:32.068598 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xw2pn" Jan 29 16:18:32 crc kubenswrapper[4895]: I0129 16:18:32.145214 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9v2kn"] Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.195574 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" 
podUID="3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" containerName="registry" containerID="cri-o://c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e" gracePeriod=30 Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.559512 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.650266 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-installation-pull-secrets\") pod \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.650419 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-bound-sa-token\") pod \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.650451 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-trusted-ca\") pod \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.650489 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-ca-trust-extracted\") pod \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.650524 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-registry-certificates\") pod \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.650580 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5hlk\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-kube-api-access-l5hlk\") pod \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.650612 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-registry-tls\") pod \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.650911 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\" (UID: \"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e\") " Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.653495 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.653937 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.662740 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.665415 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.666102 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-kube-api-access-l5hlk" (OuterVolumeSpecName: "kube-api-access-l5hlk") pod "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e"). InnerVolumeSpecName "kube-api-access-l5hlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.668726 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.668899 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.669296 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" (UID: "3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.751762 4895 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.751806 4895 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.751818 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.751829 4895 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.751842 4895 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.751851 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5hlk\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-kube-api-access-l5hlk\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.751860 4895 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:18:57 crc 
kubenswrapper[4895]: I0129 16:18:57.791662 4895 generic.go:334] "Generic (PLEG): container finished" podID="3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" containerID="c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e" exitCode=0 Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.791739 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" event={"ID":"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e","Type":"ContainerDied","Data":"c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e"} Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.791775 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.791811 4895 scope.go:117] "RemoveContainer" containerID="c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.791789 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9v2kn" event={"ID":"3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e","Type":"ContainerDied","Data":"ce601eaf30554e24e0d15851730c5113bad1a0e153a1f80787d7027ba3a8bb16"} Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.818292 4895 scope.go:117] "RemoveContainer" containerID="c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e" Jan 29 16:18:57 crc kubenswrapper[4895]: E0129 16:18:57.819084 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e\": container with ID starting with c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e not found: ID does not exist" containerID="c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.819151 4895 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e"} err="failed to get container status \"c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e\": rpc error: code = NotFound desc = could not find container \"c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e\": container with ID starting with c429509d9e9a86fea2566fabeed07ef9c7d39f49bd27b7f28fecb7af8e12c74e not found: ID does not exist" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.829405 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.829534 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.829911 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.831009 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56067da70a2eed3187e4fd8c7753eb01f974e7c0d15e185a67c2e03037c7bf90"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.831102 4895 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://56067da70a2eed3187e4fd8c7753eb01f974e7c0d15e185a67c2e03037c7bf90" gracePeriod=600 Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.832051 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9v2kn"] Jan 29 16:18:57 crc kubenswrapper[4895]: I0129 16:18:57.839257 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9v2kn"] Jan 29 16:18:58 crc kubenswrapper[4895]: I0129 16:18:58.804812 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="56067da70a2eed3187e4fd8c7753eb01f974e7c0d15e185a67c2e03037c7bf90" exitCode=0 Jan 29 16:18:58 crc kubenswrapper[4895]: I0129 16:18:58.805036 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"56067da70a2eed3187e4fd8c7753eb01f974e7c0d15e185a67c2e03037c7bf90"} Jan 29 16:18:58 crc kubenswrapper[4895]: I0129 16:18:58.805427 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"b99c1c7666a18a4fff479ed291067c0500fca6ffc17eb2b91e878cb7ce4ad701"} Jan 29 16:18:58 crc kubenswrapper[4895]: I0129 16:18:58.805469 4895 scope.go:117] "RemoveContainer" containerID="2e20ae982c3c08edbe62f04934e293bad08e3d4632e97a190ee81a409341ad6b" Jan 29 16:18:59 crc kubenswrapper[4895]: I0129 16:18:59.049091 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" 
path="/var/lib/kubelet/pods/3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e/volumes" Jan 29 16:21:27 crc kubenswrapper[4895]: I0129 16:21:27.823043 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:21:27 crc kubenswrapper[4895]: I0129 16:21:27.823832 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.263075 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rfvkt"] Jan 29 16:21:54 crc kubenswrapper[4895]: E0129 16:21:54.264232 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" containerName="registry" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.264249 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" containerName="registry" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.264383 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ca5a101-cfa3-4c2b-b5b1-93fb718ef90e" containerName="registry" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.264902 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rfvkt" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.268616 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.268953 4895 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-w7kdl" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.269534 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.285212 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-5qzkz"] Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.286484 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-5qzkz" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.286749 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rfvkt"] Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.289371 4895 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-m9859" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.302787 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-5qzkz"] Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.302844 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-h2qtn"] Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.305573 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-h2qtn" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.307399 4895 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-zb72k" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.314851 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-h2qtn"] Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.433810 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz47r\" (UniqueName: \"kubernetes.io/projected/14bf5370-0eb3-41bf-a14a-0115f945a9bb-kube-api-access-tz47r\") pod \"cert-manager-cainjector-cf98fcc89-rfvkt\" (UID: \"14bf5370-0eb3-41bf-a14a-0115f945a9bb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rfvkt" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.433860 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg687\" (UniqueName: \"kubernetes.io/projected/18e923cd-60ff-4beb-8e93-52e824bfd999-kube-api-access-rg687\") pod \"cert-manager-858654f9db-5qzkz\" (UID: \"18e923cd-60ff-4beb-8e93-52e824bfd999\") " pod="cert-manager/cert-manager-858654f9db-5qzkz" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.433982 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp9jz\" (UniqueName: \"kubernetes.io/projected/15590504-595f-4b06-a0a1-5f25e83967ec-kube-api-access-bp9jz\") pod \"cert-manager-webhook-687f57d79b-h2qtn\" (UID: \"15590504-595f-4b06-a0a1-5f25e83967ec\") " pod="cert-manager/cert-manager-webhook-687f57d79b-h2qtn" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.535350 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp9jz\" (UniqueName: 
\"kubernetes.io/projected/15590504-595f-4b06-a0a1-5f25e83967ec-kube-api-access-bp9jz\") pod \"cert-manager-webhook-687f57d79b-h2qtn\" (UID: \"15590504-595f-4b06-a0a1-5f25e83967ec\") " pod="cert-manager/cert-manager-webhook-687f57d79b-h2qtn" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.535447 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz47r\" (UniqueName: \"kubernetes.io/projected/14bf5370-0eb3-41bf-a14a-0115f945a9bb-kube-api-access-tz47r\") pod \"cert-manager-cainjector-cf98fcc89-rfvkt\" (UID: \"14bf5370-0eb3-41bf-a14a-0115f945a9bb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rfvkt" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.535476 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg687\" (UniqueName: \"kubernetes.io/projected/18e923cd-60ff-4beb-8e93-52e824bfd999-kube-api-access-rg687\") pod \"cert-manager-858654f9db-5qzkz\" (UID: \"18e923cd-60ff-4beb-8e93-52e824bfd999\") " pod="cert-manager/cert-manager-858654f9db-5qzkz" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.557062 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz47r\" (UniqueName: \"kubernetes.io/projected/14bf5370-0eb3-41bf-a14a-0115f945a9bb-kube-api-access-tz47r\") pod \"cert-manager-cainjector-cf98fcc89-rfvkt\" (UID: \"14bf5370-0eb3-41bf-a14a-0115f945a9bb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rfvkt" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.558263 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp9jz\" (UniqueName: \"kubernetes.io/projected/15590504-595f-4b06-a0a1-5f25e83967ec-kube-api-access-bp9jz\") pod \"cert-manager-webhook-687f57d79b-h2qtn\" (UID: \"15590504-595f-4b06-a0a1-5f25e83967ec\") " pod="cert-manager/cert-manager-webhook-687f57d79b-h2qtn" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.560308 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg687\" (UniqueName: \"kubernetes.io/projected/18e923cd-60ff-4beb-8e93-52e824bfd999-kube-api-access-rg687\") pod \"cert-manager-858654f9db-5qzkz\" (UID: \"18e923cd-60ff-4beb-8e93-52e824bfd999\") " pod="cert-manager/cert-manager-858654f9db-5qzkz" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.587441 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rfvkt" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.600575 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-5qzkz" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.628840 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-h2qtn" Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.956329 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-h2qtn"] Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.969475 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:21:54 crc kubenswrapper[4895]: I0129 16:21:54.992367 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-h2qtn" event={"ID":"15590504-595f-4b06-a0a1-5f25e83967ec","Type":"ContainerStarted","Data":"0cc22febea4c6c908ef10371ddd6fcf1ae7c1fb26771c59b6598698b78866077"} Jan 29 16:21:55 crc kubenswrapper[4895]: I0129 16:21:55.088643 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-5qzkz"] Jan 29 16:21:55 crc kubenswrapper[4895]: I0129 16:21:55.095202 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rfvkt"] Jan 29 16:21:55 crc kubenswrapper[4895]: W0129 16:21:55.105027 4895 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18e923cd_60ff_4beb_8e93_52e824bfd999.slice/crio-b0e1339d670d608bd52128138ebbae4abca2682b6426f8cc90ec5a72632301f2 WatchSource:0}: Error finding container b0e1339d670d608bd52128138ebbae4abca2682b6426f8cc90ec5a72632301f2: Status 404 returned error can't find the container with id b0e1339d670d608bd52128138ebbae4abca2682b6426f8cc90ec5a72632301f2 Jan 29 16:21:55 crc kubenswrapper[4895]: W0129 16:21:55.105675 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14bf5370_0eb3_41bf_a14a_0115f945a9bb.slice/crio-8718bd338408e4bf4e1a6055d003b165c34f86183d88f5e6fd6279e46fc92612 WatchSource:0}: Error finding container 8718bd338408e4bf4e1a6055d003b165c34f86183d88f5e6fd6279e46fc92612: Status 404 returned error can't find the container with id 8718bd338408e4bf4e1a6055d003b165c34f86183d88f5e6fd6279e46fc92612 Jan 29 16:21:56 crc kubenswrapper[4895]: I0129 16:21:56.008650 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-5qzkz" event={"ID":"18e923cd-60ff-4beb-8e93-52e824bfd999","Type":"ContainerStarted","Data":"b0e1339d670d608bd52128138ebbae4abca2682b6426f8cc90ec5a72632301f2"} Jan 29 16:21:56 crc kubenswrapper[4895]: I0129 16:21:56.010561 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rfvkt" event={"ID":"14bf5370-0eb3-41bf-a14a-0115f945a9bb","Type":"ContainerStarted","Data":"8718bd338408e4bf4e1a6055d003b165c34f86183d88f5e6fd6279e46fc92612"} Jan 29 16:21:57 crc kubenswrapper[4895]: I0129 16:21:57.823132 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 29 16:21:57 crc kubenswrapper[4895]: I0129 16:21:57.823710 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:22:01 crc kubenswrapper[4895]: I0129 16:22:01.046066 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-5qzkz" event={"ID":"18e923cd-60ff-4beb-8e93-52e824bfd999","Type":"ContainerStarted","Data":"b2ce599c4b5dd44f8c75dc4fe485dcc2a1d06d658de5e3893dc0c936e078a1de"} Jan 29 16:22:01 crc kubenswrapper[4895]: I0129 16:22:01.048117 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-h2qtn" event={"ID":"15590504-595f-4b06-a0a1-5f25e83967ec","Type":"ContainerStarted","Data":"1d2e7e2d9e8c0cdc3f8988bcad2dc5b7f3143da2ddb1cfc92f2a92b39a89a5b0"} Jan 29 16:22:01 crc kubenswrapper[4895]: I0129 16:22:01.048192 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-h2qtn" Jan 29 16:22:01 crc kubenswrapper[4895]: I0129 16:22:01.050623 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rfvkt" event={"ID":"14bf5370-0eb3-41bf-a14a-0115f945a9bb","Type":"ContainerStarted","Data":"37e30c9ea45cba299dc8ce45c634ed3728ded2c18ab92754dd6036a94af72c13"} Jan 29 16:22:01 crc kubenswrapper[4895]: I0129 16:22:01.069148 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-5qzkz" podStartSLOduration=2.14246584 podStartE2EDuration="7.069116308s" podCreationTimestamp="2026-01-29 16:21:54 +0000 UTC" firstStartedPulling="2026-01-29 16:21:55.107623233 +0000 UTC m=+598.910600497" lastFinishedPulling="2026-01-29 
16:22:00.034273701 +0000 UTC m=+603.837250965" observedRunningTime="2026-01-29 16:22:01.061063955 +0000 UTC m=+604.864041219" watchObservedRunningTime="2026-01-29 16:22:01.069116308 +0000 UTC m=+604.872093592" Jan 29 16:22:01 crc kubenswrapper[4895]: I0129 16:22:01.082739 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rfvkt" podStartSLOduration=2.171088938 podStartE2EDuration="7.082720305s" podCreationTimestamp="2026-01-29 16:21:54 +0000 UTC" firstStartedPulling="2026-01-29 16:21:55.112041249 +0000 UTC m=+598.915018513" lastFinishedPulling="2026-01-29 16:22:00.023672626 +0000 UTC m=+603.826649880" observedRunningTime="2026-01-29 16:22:01.081154947 +0000 UTC m=+604.884132231" watchObservedRunningTime="2026-01-29 16:22:01.082720305 +0000 UTC m=+604.885697569" Jan 29 16:22:01 crc kubenswrapper[4895]: I0129 16:22:01.134655 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-h2qtn" podStartSLOduration=2.047023493 podStartE2EDuration="7.134608548s" podCreationTimestamp="2026-01-29 16:21:54 +0000 UTC" firstStartedPulling="2026-01-29 16:21:54.968053747 +0000 UTC m=+598.771031011" lastFinishedPulling="2026-01-29 16:22:00.055638762 +0000 UTC m=+603.858616066" observedRunningTime="2026-01-29 16:22:01.127973539 +0000 UTC m=+604.930950803" watchObservedRunningTime="2026-01-29 16:22:01.134608548 +0000 UTC m=+604.937585812" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.116699 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j8c5m"] Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.117484 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovn-controller" 
containerID="cri-o://f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c" gracePeriod=30 Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.117895 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="sbdb" containerID="cri-o://672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace" gracePeriod=30 Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.117933 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="nbdb" containerID="cri-o://b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74" gracePeriod=30 Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.117975 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="northd" containerID="cri-o://42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3" gracePeriod=30 Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.118007 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1" gracePeriod=30 Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.118037 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="kube-rbac-proxy-node" containerID="cri-o://f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617" gracePeriod=30 Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.118065 4895 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovn-acl-logging" containerID="cri-o://023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433" gracePeriod=30 Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.166154 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" containerID="cri-o://12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca" gracePeriod=30 Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.385467 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/3.log" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.387440 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovn-acl-logging/0.log" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.388481 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovn-controller/0.log" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.389106 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.440896 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mcg6c"] Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441119 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovn-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441133 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovn-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441144 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="kube-rbac-proxy-node" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441150 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="kube-rbac-proxy-node" Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441160 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441166 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441174 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="northd" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441180 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="northd" Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441186 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" 
containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441192 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441202 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovn-acl-logging" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441209 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovn-acl-logging" Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441218 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="nbdb" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441225 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="nbdb" Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441234 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441239 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441246 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="sbdb" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441251 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="sbdb" Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441261 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="kubecfg-setup" Jan 29 16:22:04 
crc kubenswrapper[4895]: I0129 16:22:04.441267 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="kubecfg-setup" Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441275 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441281 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441291 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441297 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441378 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441389 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441397 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="sbdb" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441406 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441413 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" 
containerName="ovn-acl-logging" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441421 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441428 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="northd" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441433 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441440 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovn-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441447 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="nbdb" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441456 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="kube-rbac-proxy-node" Jan 29 16:22:04 crc kubenswrapper[4895]: E0129 16:22:04.441547 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441554 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.441641 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerName="ovnkube-controller" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.443173 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501192 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-run-netns\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501253 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-systemd-units\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501298 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-ovn\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501329 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501346 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovnkube-script-lib\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501367 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501390 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-run-ovn-kubernetes\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501420 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovnkube-config\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501447 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6f9b\" (UniqueName: \"kubernetes.io/projected/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-kube-api-access-b6f9b\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501439 
4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501462 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501473 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-env-overrides\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501564 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-var-lib-openvswitch\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501585 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-cni-netd\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501607 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-openvswitch\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501625 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-node-log\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501642 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-etc-openvswitch\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501657 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-log-socket\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501666 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501676 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovn-node-metrics-cert\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501675 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501722 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-node-log" (OuterVolumeSpecName: "node-log") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501731 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-slash\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501750 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). 
InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501758 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501777 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-slash" (OuterVolumeSpecName: "host-slash") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501786 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-systemd\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501801 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501805 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-cni-bin\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501828 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501852 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-kubelet\") pod \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\" (UID: \"b00f5c7f-4264-4580-9c5a-ace62ee4b87d\") " Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501854 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.501977 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-log-socket" (OuterVolumeSpecName: "log-socket") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502087 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502134 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502176 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502367 4895 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502380 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502392 4895 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502401 4895 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502410 4895 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502419 4895 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502428 4895 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 
16:22:04.502436 4895 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502445 4895 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502452 4895 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502461 4895 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502470 4895 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502480 4895 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502490 4895 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502497 4895 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502507 4895 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.502853 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.507697 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.510178 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-kube-api-access-b6f9b" (OuterVolumeSpecName: "kube-api-access-b6f9b") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "kube-api-access-b6f9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.520193 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b00f5c7f-4264-4580-9c5a-ace62ee4b87d" (UID: "b00f5c7f-4264-4580-9c5a-ace62ee4b87d"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603418 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-etc-openvswitch\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603499 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41349cb5-aa47-4f66-9f08-6d303bd044ea-ovn-node-metrics-cert\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603555 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-run-ovn-kubernetes\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603652 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-run-ovn\") pod 
\"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603682 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41349cb5-aa47-4f66-9f08-6d303bd044ea-ovnkube-script-lib\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603715 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-var-lib-openvswitch\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603747 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41349cb5-aa47-4f66-9f08-6d303bd044ea-env-overrides\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603777 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-node-log\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603813 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-cni-netd\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603845 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-log-socket\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603906 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92slv\" (UniqueName: \"kubernetes.io/projected/41349cb5-aa47-4f66-9f08-6d303bd044ea-kube-api-access-92slv\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603934 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-cni-bin\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.603963 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-run-systemd\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.604008 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-run-openvswitch\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.604052 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-run-netns\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.604143 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41349cb5-aa47-4f66-9f08-6d303bd044ea-ovnkube-config\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.604204 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-slash\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.604250 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.604274 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-systemd-units\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.604295 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-kubelet\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.604383 4895 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.604415 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6f9b\" (UniqueName: \"kubernetes.io/projected/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-kube-api-access-b6f9b\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.604437 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.604456 4895 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b00f5c7f-4264-4580-9c5a-ace62ee4b87d-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706037 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-etc-openvswitch\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706125 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-etc-openvswitch\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706153 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41349cb5-aa47-4f66-9f08-6d303bd044ea-ovn-node-metrics-cert\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706204 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-run-ovn-kubernetes\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706252 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41349cb5-aa47-4f66-9f08-6d303bd044ea-ovnkube-script-lib\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706297 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-run-ovn\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706350 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-var-lib-openvswitch\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706397 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41349cb5-aa47-4f66-9f08-6d303bd044ea-env-overrides\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706443 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-node-log\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706490 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-cni-netd\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706576 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-log-socket\") pod 
\"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706621 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92slv\" (UniqueName: \"kubernetes.io/projected/41349cb5-aa47-4f66-9f08-6d303bd044ea-kube-api-access-92slv\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706705 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-cni-bin\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706750 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-run-systemd\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706802 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-run-openvswitch\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.706962 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-run-netns\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.707016 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41349cb5-aa47-4f66-9f08-6d303bd044ea-ovnkube-config\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.707065 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-slash\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.707127 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.707172 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-log-socket\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.707182 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-systemd-units\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 
29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.707209 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-run-ovn-kubernetes\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.707239 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-kubelet\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.707325 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-run-netns\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.707363 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-kubelet\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.707788 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-cni-bin\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.707842 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-run-systemd\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.707922 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-run-openvswitch\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.708090 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.708141 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-slash\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.708610 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/41349cb5-aa47-4f66-9f08-6d303bd044ea-ovnkube-script-lib\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.708647 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-node-log\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.708689 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-host-cni-netd\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.708713 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-run-ovn\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.708734 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-var-lib-openvswitch\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.708183 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/41349cb5-aa47-4f66-9f08-6d303bd044ea-systemd-units\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.708956 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41349cb5-aa47-4f66-9f08-6d303bd044ea-env-overrides\") pod \"ovnkube-node-mcg6c\" (UID: 
\"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.709289 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41349cb5-aa47-4f66-9f08-6d303bd044ea-ovnkube-config\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.711752 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41349cb5-aa47-4f66-9f08-6d303bd044ea-ovn-node-metrics-cert\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.737119 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92slv\" (UniqueName: \"kubernetes.io/projected/41349cb5-aa47-4f66-9f08-6d303bd044ea-kube-api-access-92slv\") pod \"ovnkube-node-mcg6c\" (UID: \"41349cb5-aa47-4f66-9f08-6d303bd044ea\") " pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:04 crc kubenswrapper[4895]: I0129 16:22:04.760481 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.076254 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovnkube-controller/3.log" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.079792 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovn-acl-logging/0.log" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.080452 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j8c5m_b00f5c7f-4264-4580-9c5a-ace62ee4b87d/ovn-controller/0.log" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081047 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerID="12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca" exitCode=0 Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081086 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerID="672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace" exitCode=0 Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081100 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerID="b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74" exitCode=0 Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081112 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerID="42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3" exitCode=0 Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081123 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" 
containerID="08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1" exitCode=0 Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081135 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerID="f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617" exitCode=0 Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081159 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerID="023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433" exitCode=143 Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081172 4895 generic.go:334] "Generic (PLEG): container finished" podID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" containerID="f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c" exitCode=143 Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081109 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081216 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081242 4895 scope.go:117] "RemoveContainer" containerID="12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081223 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081388 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081419 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081437 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081454 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081498 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081515 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081524 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081533 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081541 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081550 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081557 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081565 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081573 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081585 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081597 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081608 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081617 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081627 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081636 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081648 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081659 4895 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081669 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081678 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081687 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081699 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081712 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081724 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081732 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace"} Jan 29 
16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081741 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081750 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081757 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081766 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081774 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081783 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081792 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081804 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j8c5m" 
event={"ID":"b00f5c7f-4264-4580-9c5a-ace62ee4b87d","Type":"ContainerDied","Data":"3b7e375e6e89086852c6f5d9e2950640eaf608eeaa609351c26b9859442b8154"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081816 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081826 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081835 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081844 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081852 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081860 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081889 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081897 4895 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081905 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.081913 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.084283 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7p5vp_dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5/kube-multus/2.log" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.090334 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7p5vp_dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5/kube-multus/1.log" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.090409 4895 generic.go:334] "Generic (PLEG): container finished" podID="dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5" containerID="660b56274f2e87987653cca9fdc4a251f69f781a488edd0d7c75ff5126604a2d" exitCode=2 Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.090529 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7p5vp" event={"ID":"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5","Type":"ContainerDied","Data":"660b56274f2e87987653cca9fdc4a251f69f781a488edd0d7c75ff5126604a2d"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.090571 4895 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.091208 4895 scope.go:117] "RemoveContainer" 
containerID="660b56274f2e87987653cca9fdc4a251f69f781a488edd0d7c75ff5126604a2d" Jan 29 16:22:05 crc kubenswrapper[4895]: E0129 16:22:05.091525 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-7p5vp_openshift-multus(dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5)\"" pod="openshift-multus/multus-7p5vp" podUID="dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.093246 4895 generic.go:334] "Generic (PLEG): container finished" podID="41349cb5-aa47-4f66-9f08-6d303bd044ea" containerID="1c914c8af099c82788e26e3493afd7048a037fce680ad45e846916ca6c2337ee" exitCode=0 Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.093286 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" event={"ID":"41349cb5-aa47-4f66-9f08-6d303bd044ea","Type":"ContainerDied","Data":"1c914c8af099c82788e26e3493afd7048a037fce680ad45e846916ca6c2337ee"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.093306 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" event={"ID":"41349cb5-aa47-4f66-9f08-6d303bd044ea","Type":"ContainerStarted","Data":"8d581e271e38780947106e71531d512690cf3ee0bd21012f6085bf9fc383dd15"} Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.124788 4895 scope.go:117] "RemoveContainer" containerID="ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.156294 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j8c5m"] Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.161499 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j8c5m"] Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.173044 4895 scope.go:117] "RemoveContainer" 
containerID="672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.199164 4895 scope.go:117] "RemoveContainer" containerID="b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.223611 4895 scope.go:117] "RemoveContainer" containerID="42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.264331 4895 scope.go:117] "RemoveContainer" containerID="08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.291993 4895 scope.go:117] "RemoveContainer" containerID="f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.315371 4895 scope.go:117] "RemoveContainer" containerID="023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.332150 4895 scope.go:117] "RemoveContainer" containerID="f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.351762 4895 scope.go:117] "RemoveContainer" containerID="7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.367103 4895 scope.go:117] "RemoveContainer" containerID="12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca" Jan 29 16:22:05 crc kubenswrapper[4895]: E0129 16:22:05.367705 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca\": container with ID starting with 12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca not found: ID does not exist" containerID="12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca" Jan 29 16:22:05 crc 
kubenswrapper[4895]: I0129 16:22:05.367740 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca"} err="failed to get container status \"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca\": rpc error: code = NotFound desc = could not find container \"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca\": container with ID starting with 12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.367763 4895 scope.go:117] "RemoveContainer" containerID="ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833" Jan 29 16:22:05 crc kubenswrapper[4895]: E0129 16:22:05.368096 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\": container with ID starting with ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833 not found: ID does not exist" containerID="ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.368115 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833"} err="failed to get container status \"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\": rpc error: code = NotFound desc = could not find container \"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\": container with ID starting with ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.368128 4895 scope.go:117] "RemoveContainer" containerID="672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace" Jan 29 
16:22:05 crc kubenswrapper[4895]: E0129 16:22:05.368415 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\": container with ID starting with 672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace not found: ID does not exist" containerID="672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.368434 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace"} err="failed to get container status \"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\": rpc error: code = NotFound desc = could not find container \"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\": container with ID starting with 672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.368449 4895 scope.go:117] "RemoveContainer" containerID="b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74" Jan 29 16:22:05 crc kubenswrapper[4895]: E0129 16:22:05.368707 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\": container with ID starting with b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74 not found: ID does not exist" containerID="b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.368731 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74"} err="failed to get container status 
\"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\": rpc error: code = NotFound desc = could not find container \"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\": container with ID starting with b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.368750 4895 scope.go:117] "RemoveContainer" containerID="42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3" Jan 29 16:22:05 crc kubenswrapper[4895]: E0129 16:22:05.369014 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\": container with ID starting with 42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3 not found: ID does not exist" containerID="42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.369036 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3"} err="failed to get container status \"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\": rpc error: code = NotFound desc = could not find container \"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\": container with ID starting with 42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.369051 4895 scope.go:117] "RemoveContainer" containerID="08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1" Jan 29 16:22:05 crc kubenswrapper[4895]: E0129 16:22:05.369320 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\": container with ID starting with 08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1 not found: ID does not exist" containerID="08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.369345 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1"} err="failed to get container status \"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\": rpc error: code = NotFound desc = could not find container \"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\": container with ID starting with 08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.369363 4895 scope.go:117] "RemoveContainer" containerID="f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617" Jan 29 16:22:05 crc kubenswrapper[4895]: E0129 16:22:05.369667 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\": container with ID starting with f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617 not found: ID does not exist" containerID="f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.369693 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617"} err="failed to get container status \"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\": rpc error: code = NotFound desc = could not find container \"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\": container with ID 
starting with f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.369711 4895 scope.go:117] "RemoveContainer" containerID="023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433" Jan 29 16:22:05 crc kubenswrapper[4895]: E0129 16:22:05.369983 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\": container with ID starting with 023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433 not found: ID does not exist" containerID="023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.370006 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433"} err="failed to get container status \"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\": rpc error: code = NotFound desc = could not find container \"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\": container with ID starting with 023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.370020 4895 scope.go:117] "RemoveContainer" containerID="f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c" Jan 29 16:22:05 crc kubenswrapper[4895]: E0129 16:22:05.370246 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\": container with ID starting with f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c not found: ID does not exist" containerID="f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c" Jan 29 
16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.370271 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c"} err="failed to get container status \"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\": rpc error: code = NotFound desc = could not find container \"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\": container with ID starting with f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.370287 4895 scope.go:117] "RemoveContainer" containerID="7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca" Jan 29 16:22:05 crc kubenswrapper[4895]: E0129 16:22:05.370518 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\": container with ID starting with 7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca not found: ID does not exist" containerID="7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.370545 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca"} err="failed to get container status \"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\": rpc error: code = NotFound desc = could not find container \"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\": container with ID starting with 7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.370559 4895 scope.go:117] "RemoveContainer" 
containerID="12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.370785 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca"} err="failed to get container status \"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca\": rpc error: code = NotFound desc = could not find container \"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca\": container with ID starting with 12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.370807 4895 scope.go:117] "RemoveContainer" containerID="ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.371053 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833"} err="failed to get container status \"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\": rpc error: code = NotFound desc = could not find container \"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\": container with ID starting with ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.371074 4895 scope.go:117] "RemoveContainer" containerID="672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.371277 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace"} err="failed to get container status \"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\": rpc error: code = NotFound desc = could 
not find container \"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\": container with ID starting with 672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.371297 4895 scope.go:117] "RemoveContainer" containerID="b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.371560 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74"} err="failed to get container status \"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\": rpc error: code = NotFound desc = could not find container \"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\": container with ID starting with b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.371582 4895 scope.go:117] "RemoveContainer" containerID="42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.371798 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3"} err="failed to get container status \"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\": rpc error: code = NotFound desc = could not find container \"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\": container with ID starting with 42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.371819 4895 scope.go:117] "RemoveContainer" containerID="08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 
16:22:05.372174 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1"} err="failed to get container status \"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\": rpc error: code = NotFound desc = could not find container \"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\": container with ID starting with 08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.372195 4895 scope.go:117] "RemoveContainer" containerID="f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.372445 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617"} err="failed to get container status \"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\": rpc error: code = NotFound desc = could not find container \"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\": container with ID starting with f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.372470 4895 scope.go:117] "RemoveContainer" containerID="023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.372714 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433"} err="failed to get container status \"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\": rpc error: code = NotFound desc = could not find container \"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\": container with ID starting with 
023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.372738 4895 scope.go:117] "RemoveContainer" containerID="f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.373003 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c"} err="failed to get container status \"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\": rpc error: code = NotFound desc = could not find container \"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\": container with ID starting with f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.373025 4895 scope.go:117] "RemoveContainer" containerID="7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.373317 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca"} err="failed to get container status \"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\": rpc error: code = NotFound desc = could not find container \"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\": container with ID starting with 7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.373334 4895 scope.go:117] "RemoveContainer" containerID="12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.373603 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca"} err="failed to get container status \"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca\": rpc error: code = NotFound desc = could not find container \"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca\": container with ID starting with 12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.373620 4895 scope.go:117] "RemoveContainer" containerID="ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.373777 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833"} err="failed to get container status \"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\": rpc error: code = NotFound desc = could not find container \"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\": container with ID starting with ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.373799 4895 scope.go:117] "RemoveContainer" containerID="672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.374142 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace"} err="failed to get container status \"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\": rpc error: code = NotFound desc = could not find container \"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\": container with ID starting with 672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace not found: ID does not 
exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.374159 4895 scope.go:117] "RemoveContainer" containerID="b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.374318 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74"} err="failed to get container status \"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\": rpc error: code = NotFound desc = could not find container \"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\": container with ID starting with b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.374341 4895 scope.go:117] "RemoveContainer" containerID="42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.374555 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3"} err="failed to get container status \"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\": rpc error: code = NotFound desc = could not find container \"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\": container with ID starting with 42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.374577 4895 scope.go:117] "RemoveContainer" containerID="08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.374750 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1"} err="failed to get container status 
\"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\": rpc error: code = NotFound desc = could not find container \"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\": container with ID starting with 08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.374771 4895 scope.go:117] "RemoveContainer" containerID="f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.375147 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617"} err="failed to get container status \"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\": rpc error: code = NotFound desc = could not find container \"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\": container with ID starting with f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.375171 4895 scope.go:117] "RemoveContainer" containerID="023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.375415 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433"} err="failed to get container status \"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\": rpc error: code = NotFound desc = could not find container \"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\": container with ID starting with 023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.375437 4895 scope.go:117] "RemoveContainer" 
containerID="f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.375700 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c"} err="failed to get container status \"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\": rpc error: code = NotFound desc = could not find container \"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\": container with ID starting with f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.375721 4895 scope.go:117] "RemoveContainer" containerID="7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.376543 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca"} err="failed to get container status \"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\": rpc error: code = NotFound desc = could not find container \"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\": container with ID starting with 7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.376564 4895 scope.go:117] "RemoveContainer" containerID="12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.377079 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca"} err="failed to get container status \"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca\": rpc error: code = NotFound desc = could 
not find container \"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca\": container with ID starting with 12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.377155 4895 scope.go:117] "RemoveContainer" containerID="ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.377452 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833"} err="failed to get container status \"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\": rpc error: code = NotFound desc = could not find container \"ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833\": container with ID starting with ed8cfb919070de540a9394393802c4f347948b50795ee8a26633cad4627f7833 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.377518 4895 scope.go:117] "RemoveContainer" containerID="672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.377971 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace"} err="failed to get container status \"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\": rpc error: code = NotFound desc = could not find container \"672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace\": container with ID starting with 672e76fe4e4349eaaf09c104ae00e3e521708125dea20f65c4db0d1740ca6ace not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.378006 4895 scope.go:117] "RemoveContainer" containerID="b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 
16:22:05.378264 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74"} err="failed to get container status \"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\": rpc error: code = NotFound desc = could not find container \"b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74\": container with ID starting with b312d762a5d0ef6916fcef2630a98a8e11f14529e92780af2f753717470a5b74 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.378310 4895 scope.go:117] "RemoveContainer" containerID="42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.378685 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3"} err="failed to get container status \"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\": rpc error: code = NotFound desc = could not find container \"42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3\": container with ID starting with 42cd1c9b184e5a7fc784e00e4c99738c3ab73265d79854111319fec23e4150f3 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.378738 4895 scope.go:117] "RemoveContainer" containerID="08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.378984 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1"} err="failed to get container status \"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\": rpc error: code = NotFound desc = could not find container \"08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1\": container with ID starting with 
08549ce2c704730f9036374e3e610f48e67d676d4562e6279ce1c1a9e9144bf1 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.379007 4895 scope.go:117] "RemoveContainer" containerID="f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.379315 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617"} err="failed to get container status \"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\": rpc error: code = NotFound desc = could not find container \"f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617\": container with ID starting with f9e24e88fda4dfdb5bee86549075bdc1f44289098f215819d28d14afa3144617 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.379337 4895 scope.go:117] "RemoveContainer" containerID="023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.379595 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433"} err="failed to get container status \"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\": rpc error: code = NotFound desc = could not find container \"023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433\": container with ID starting with 023bcae33b6654d7eb48cb2f4390f9a13ab4ea8154d5749975cab089f4da7433 not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.379623 4895 scope.go:117] "RemoveContainer" containerID="f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.379817 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c"} err="failed to get container status \"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\": rpc error: code = NotFound desc = could not find container \"f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c\": container with ID starting with f3656fb874f9abaa95d1cd1428db30bc2f7e1a83eb63b437ecc722560b34786c not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.379838 4895 scope.go:117] "RemoveContainer" containerID="7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.380202 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca"} err="failed to get container status \"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\": rpc error: code = NotFound desc = could not find container \"7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca\": container with ID starting with 7ba61b56ad4ce240e208e4cf99187a7b938c22894b17c1939620f6b74f6691ca not found: ID does not exist" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.380250 4895 scope.go:117] "RemoveContainer" containerID="12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca" Jan 29 16:22:05 crc kubenswrapper[4895]: I0129 16:22:05.380493 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca"} err="failed to get container status \"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca\": rpc error: code = NotFound desc = could not find container \"12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca\": container with ID starting with 12e336460ad783e854dbd6684e6cc1488d17336130e90cf0357029d9615406ca not found: ID does not 
exist" Jan 29 16:22:06 crc kubenswrapper[4895]: I0129 16:22:06.115207 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" event={"ID":"41349cb5-aa47-4f66-9f08-6d303bd044ea","Type":"ContainerStarted","Data":"1c5b8e98e7504e7b41c882154f7fc21199cb958a25757f511b4eb919db50cb0e"} Jan 29 16:22:06 crc kubenswrapper[4895]: I0129 16:22:06.115752 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" event={"ID":"41349cb5-aa47-4f66-9f08-6d303bd044ea","Type":"ContainerStarted","Data":"355197c948936a5b00d087842a29331191461ee2fee026ea6447b653e1c7d7a1"} Jan 29 16:22:06 crc kubenswrapper[4895]: I0129 16:22:06.115778 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" event={"ID":"41349cb5-aa47-4f66-9f08-6d303bd044ea","Type":"ContainerStarted","Data":"f30b723f7b167b12c32a3ca280668ac114360454a2597d987a069e4a4a954f51"} Jan 29 16:22:06 crc kubenswrapper[4895]: I0129 16:22:06.115796 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" event={"ID":"41349cb5-aa47-4f66-9f08-6d303bd044ea","Type":"ContainerStarted","Data":"48ee59581e51111dc750fb160108b8d08e455341fc415d4d91973fb851c4a28c"} Jan 29 16:22:06 crc kubenswrapper[4895]: I0129 16:22:06.115812 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" event={"ID":"41349cb5-aa47-4f66-9f08-6d303bd044ea","Type":"ContainerStarted","Data":"da7074b7420e3e3f82a327ed5f5b7d8db07fc59700ae88bb97d63628d7bb6bb7"} Jan 29 16:22:06 crc kubenswrapper[4895]: I0129 16:22:06.115827 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" event={"ID":"41349cb5-aa47-4f66-9f08-6d303bd044ea","Type":"ContainerStarted","Data":"68669e37ae002190b2e97eda4ad39efd291ac8f7e2348a3788bf697aec8f359e"} Jan 29 16:22:07 crc kubenswrapper[4895]: I0129 16:22:07.052166 4895 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b00f5c7f-4264-4580-9c5a-ace62ee4b87d" path="/var/lib/kubelet/pods/b00f5c7f-4264-4580-9c5a-ace62ee4b87d/volumes" Jan 29 16:22:09 crc kubenswrapper[4895]: I0129 16:22:09.140586 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" event={"ID":"41349cb5-aa47-4f66-9f08-6d303bd044ea","Type":"ContainerStarted","Data":"caea104f8e0faedbd67e0008bf2d0a21fbed7b15be32824e68fc0c05f52a4b19"} Jan 29 16:22:09 crc kubenswrapper[4895]: I0129 16:22:09.631944 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-h2qtn" Jan 29 16:22:11 crc kubenswrapper[4895]: I0129 16:22:11.167292 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" event={"ID":"41349cb5-aa47-4f66-9f08-6d303bd044ea","Type":"ContainerStarted","Data":"5f36c223cca7256b21930bfa0f5ce9bf9ed27f707be64c2354f46dd9f92904cf"} Jan 29 16:22:11 crc kubenswrapper[4895]: I0129 16:22:11.169419 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:11 crc kubenswrapper[4895]: I0129 16:22:11.169673 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:11 crc kubenswrapper[4895]: I0129 16:22:11.169794 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:11 crc kubenswrapper[4895]: I0129 16:22:11.208802 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:11 crc kubenswrapper[4895]: I0129 16:22:11.230316 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" podStartSLOduration=7.230295963 
podStartE2EDuration="7.230295963s" podCreationTimestamp="2026-01-29 16:22:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:22:11.229496224 +0000 UTC m=+615.032473508" watchObservedRunningTime="2026-01-29 16:22:11.230295963 +0000 UTC m=+615.033273227" Jan 29 16:22:11 crc kubenswrapper[4895]: I0129 16:22:11.242618 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:17 crc kubenswrapper[4895]: I0129 16:22:17.042109 4895 scope.go:117] "RemoveContainer" containerID="660b56274f2e87987653cca9fdc4a251f69f781a488edd0d7c75ff5126604a2d" Jan 29 16:22:17 crc kubenswrapper[4895]: E0129 16:22:17.042804 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-7p5vp_openshift-multus(dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5)\"" pod="openshift-multus/multus-7p5vp" podUID="dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5" Jan 29 16:22:27 crc kubenswrapper[4895]: I0129 16:22:27.822857 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:22:27 crc kubenswrapper[4895]: I0129 16:22:27.823633 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:22:27 crc kubenswrapper[4895]: I0129 16:22:27.823699 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:22:27 crc kubenswrapper[4895]: I0129 16:22:27.824339 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b99c1c7666a18a4fff479ed291067c0500fca6ffc17eb2b91e878cb7ce4ad701"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:22:27 crc kubenswrapper[4895]: I0129 16:22:27.824406 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://b99c1c7666a18a4fff479ed291067c0500fca6ffc17eb2b91e878cb7ce4ad701" gracePeriod=600 Jan 29 16:22:28 crc kubenswrapper[4895]: I0129 16:22:28.287590 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="b99c1c7666a18a4fff479ed291067c0500fca6ffc17eb2b91e878cb7ce4ad701" exitCode=0 Jan 29 16:22:28 crc kubenswrapper[4895]: I0129 16:22:28.287638 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"b99c1c7666a18a4fff479ed291067c0500fca6ffc17eb2b91e878cb7ce4ad701"} Jan 29 16:22:28 crc kubenswrapper[4895]: I0129 16:22:28.288244 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"38b01ff5ef7faf80c7f2424640fb866df9e6d62369651d4360c4c301990dfde0"} Jan 29 16:22:28 crc kubenswrapper[4895]: I0129 16:22:28.288270 4895 scope.go:117] "RemoveContainer" 
containerID="56067da70a2eed3187e4fd8c7753eb01f974e7c0d15e185a67c2e03037c7bf90" Jan 29 16:22:32 crc kubenswrapper[4895]: I0129 16:22:32.036653 4895 scope.go:117] "RemoveContainer" containerID="660b56274f2e87987653cca9fdc4a251f69f781a488edd0d7c75ff5126604a2d" Jan 29 16:22:32 crc kubenswrapper[4895]: I0129 16:22:32.319400 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7p5vp_dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5/kube-multus/2.log" Jan 29 16:22:32 crc kubenswrapper[4895]: I0129 16:22:32.320480 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7p5vp_dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5/kube-multus/1.log" Jan 29 16:22:32 crc kubenswrapper[4895]: I0129 16:22:32.320530 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7p5vp" event={"ID":"dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5","Type":"ContainerStarted","Data":"fc7adff89e5c568062295a39cd074f8fdf2087ef839779fca9ba3fc10cf9b083"} Jan 29 16:22:34 crc kubenswrapper[4895]: I0129 16:22:34.789893 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mcg6c" Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.647406 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g"] Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.649366 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.651605 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.659706 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g"] Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.684611 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9810d12-a970-4769-ae18-6147ea348121-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g\" (UID: \"a9810d12-a970-4769-ae18-6147ea348121\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.684700 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9810d12-a970-4769-ae18-6147ea348121-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g\" (UID: \"a9810d12-a970-4769-ae18-6147ea348121\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.684755 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bldq\" (UniqueName: \"kubernetes.io/projected/a9810d12-a970-4769-ae18-6147ea348121-kube-api-access-6bldq\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g\" (UID: \"a9810d12-a970-4769-ae18-6147ea348121\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:22:53 crc kubenswrapper[4895]: 
I0129 16:22:53.786042 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9810d12-a970-4769-ae18-6147ea348121-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g\" (UID: \"a9810d12-a970-4769-ae18-6147ea348121\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.786151 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9810d12-a970-4769-ae18-6147ea348121-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g\" (UID: \"a9810d12-a970-4769-ae18-6147ea348121\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.786202 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bldq\" (UniqueName: \"kubernetes.io/projected/a9810d12-a970-4769-ae18-6147ea348121-kube-api-access-6bldq\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g\" (UID: \"a9810d12-a970-4769-ae18-6147ea348121\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.786808 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9810d12-a970-4769-ae18-6147ea348121-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g\" (UID: \"a9810d12-a970-4769-ae18-6147ea348121\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.786885 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a9810d12-a970-4769-ae18-6147ea348121-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g\" (UID: \"a9810d12-a970-4769-ae18-6147ea348121\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.811589 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bldq\" (UniqueName: \"kubernetes.io/projected/a9810d12-a970-4769-ae18-6147ea348121-kube-api-access-6bldq\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g\" (UID: \"a9810d12-a970-4769-ae18-6147ea348121\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:22:53 crc kubenswrapper[4895]: I0129 16:22:53.977309 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:22:54 crc kubenswrapper[4895]: I0129 16:22:54.195845 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g"] Jan 29 16:22:54 crc kubenswrapper[4895]: I0129 16:22:54.475700 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" event={"ID":"a9810d12-a970-4769-ae18-6147ea348121","Type":"ContainerStarted","Data":"13a4cf17b99ee0fd84e0c100715f59837730df478dacfce596fc87aa367d91cd"} Jan 29 16:22:54 crc kubenswrapper[4895]: I0129 16:22:54.475783 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" event={"ID":"a9810d12-a970-4769-ae18-6147ea348121","Type":"ContainerStarted","Data":"f310018acfb22677775c5588f3ae6116ea4e851c9dd05671567693a1a636c8c2"} Jan 29 16:22:55 crc kubenswrapper[4895]: I0129 16:22:55.487797 4895 
generic.go:334] "Generic (PLEG): container finished" podID="a9810d12-a970-4769-ae18-6147ea348121" containerID="13a4cf17b99ee0fd84e0c100715f59837730df478dacfce596fc87aa367d91cd" exitCode=0 Jan 29 16:22:55 crc kubenswrapper[4895]: I0129 16:22:55.488292 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" event={"ID":"a9810d12-a970-4769-ae18-6147ea348121","Type":"ContainerDied","Data":"13a4cf17b99ee0fd84e0c100715f59837730df478dacfce596fc87aa367d91cd"} Jan 29 16:22:57 crc kubenswrapper[4895]: I0129 16:22:57.278644 4895 scope.go:117] "RemoveContainer" containerID="35e77e0bb743439e73ccd35551646714c2b196b7377392139125244a7315e397" Jan 29 16:22:57 crc kubenswrapper[4895]: I0129 16:22:57.504217 4895 generic.go:334] "Generic (PLEG): container finished" podID="a9810d12-a970-4769-ae18-6147ea348121" containerID="f770b38547cf1083befd6074bb25ec3594799f5a787d3048602c2ad3a2f7184d" exitCode=0 Jan 29 16:22:57 crc kubenswrapper[4895]: I0129 16:22:57.504270 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" event={"ID":"a9810d12-a970-4769-ae18-6147ea348121","Type":"ContainerDied","Data":"f770b38547cf1083befd6074bb25ec3594799f5a787d3048602c2ad3a2f7184d"} Jan 29 16:22:57 crc kubenswrapper[4895]: I0129 16:22:57.506030 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7p5vp_dc6302de-6cc5-48ec-a6c2-64e86f0fdcb5/kube-multus/2.log" Jan 29 16:22:58 crc kubenswrapper[4895]: I0129 16:22:58.518148 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" event={"ID":"a9810d12-a970-4769-ae18-6147ea348121","Type":"ContainerDied","Data":"6d2b8311bcfee27919e1c8cd53ba35186a5daa2de98d0cda5a323377bc2208c9"} Jan 29 16:22:58 crc kubenswrapper[4895]: I0129 16:22:58.518474 4895 generic.go:334] 
"Generic (PLEG): container finished" podID="a9810d12-a970-4769-ae18-6147ea348121" containerID="6d2b8311bcfee27919e1c8cd53ba35186a5daa2de98d0cda5a323377bc2208c9" exitCode=0 Jan 29 16:22:59 crc kubenswrapper[4895]: I0129 16:22:59.867559 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:22:59 crc kubenswrapper[4895]: I0129 16:22:59.900045 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9810d12-a970-4769-ae18-6147ea348121-util\") pod \"a9810d12-a970-4769-ae18-6147ea348121\" (UID: \"a9810d12-a970-4769-ae18-6147ea348121\") " Jan 29 16:22:59 crc kubenswrapper[4895]: I0129 16:22:59.900116 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9810d12-a970-4769-ae18-6147ea348121-bundle\") pod \"a9810d12-a970-4769-ae18-6147ea348121\" (UID: \"a9810d12-a970-4769-ae18-6147ea348121\") " Jan 29 16:22:59 crc kubenswrapper[4895]: I0129 16:22:59.900191 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bldq\" (UniqueName: \"kubernetes.io/projected/a9810d12-a970-4769-ae18-6147ea348121-kube-api-access-6bldq\") pod \"a9810d12-a970-4769-ae18-6147ea348121\" (UID: \"a9810d12-a970-4769-ae18-6147ea348121\") " Jan 29 16:22:59 crc kubenswrapper[4895]: I0129 16:22:59.901884 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9810d12-a970-4769-ae18-6147ea348121-bundle" (OuterVolumeSpecName: "bundle") pod "a9810d12-a970-4769-ae18-6147ea348121" (UID: "a9810d12-a970-4769-ae18-6147ea348121"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:59 crc kubenswrapper[4895]: I0129 16:22:59.910887 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9810d12-a970-4769-ae18-6147ea348121-util" (OuterVolumeSpecName: "util") pod "a9810d12-a970-4769-ae18-6147ea348121" (UID: "a9810d12-a970-4769-ae18-6147ea348121"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:59 crc kubenswrapper[4895]: I0129 16:22:59.929755 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9810d12-a970-4769-ae18-6147ea348121-kube-api-access-6bldq" (OuterVolumeSpecName: "kube-api-access-6bldq") pod "a9810d12-a970-4769-ae18-6147ea348121" (UID: "a9810d12-a970-4769-ae18-6147ea348121"). InnerVolumeSpecName "kube-api-access-6bldq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:23:00 crc kubenswrapper[4895]: I0129 16:23:00.001236 4895 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9810d12-a970-4769-ae18-6147ea348121-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:00 crc kubenswrapper[4895]: I0129 16:23:00.001270 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bldq\" (UniqueName: \"kubernetes.io/projected/a9810d12-a970-4769-ae18-6147ea348121-kube-api-access-6bldq\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:00 crc kubenswrapper[4895]: I0129 16:23:00.001284 4895 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9810d12-a970-4769-ae18-6147ea348121-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:00 crc kubenswrapper[4895]: I0129 16:23:00.544547 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" 
event={"ID":"a9810d12-a970-4769-ae18-6147ea348121","Type":"ContainerDied","Data":"f310018acfb22677775c5588f3ae6116ea4e851c9dd05671567693a1a636c8c2"} Jan 29 16:23:00 crc kubenswrapper[4895]: I0129 16:23:00.544601 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f310018acfb22677775c5588f3ae6116ea4e851c9dd05671567693a1a636c8c2" Jan 29 16:23:00 crc kubenswrapper[4895]: I0129 16:23:00.544746 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.389365 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wps6n"] Jan 29 16:23:05 crc kubenswrapper[4895]: E0129 16:23:05.391184 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9810d12-a970-4769-ae18-6147ea348121" containerName="extract" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.391206 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9810d12-a970-4769-ae18-6147ea348121" containerName="extract" Jan 29 16:23:05 crc kubenswrapper[4895]: E0129 16:23:05.391222 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9810d12-a970-4769-ae18-6147ea348121" containerName="util" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.391232 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9810d12-a970-4769-ae18-6147ea348121" containerName="util" Jan 29 16:23:05 crc kubenswrapper[4895]: E0129 16:23:05.391247 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9810d12-a970-4769-ae18-6147ea348121" containerName="pull" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.391256 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9810d12-a970-4769-ae18-6147ea348121" containerName="pull" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.391393 4895 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a9810d12-a970-4769-ae18-6147ea348121" containerName="extract" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.391981 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wps6n" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.395388 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.396908 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.397159 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rn6ww" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.417071 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wps6n"] Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.491080 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvx95\" (UniqueName: \"kubernetes.io/projected/a3a795fb-ebbd-463c-8aae-317aafd133f8-kube-api-access-qvx95\") pod \"nmstate-operator-646758c888-wps6n\" (UID: \"a3a795fb-ebbd-463c-8aae-317aafd133f8\") " pod="openshift-nmstate/nmstate-operator-646758c888-wps6n" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.592792 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvx95\" (UniqueName: \"kubernetes.io/projected/a3a795fb-ebbd-463c-8aae-317aafd133f8-kube-api-access-qvx95\") pod \"nmstate-operator-646758c888-wps6n\" (UID: \"a3a795fb-ebbd-463c-8aae-317aafd133f8\") " pod="openshift-nmstate/nmstate-operator-646758c888-wps6n" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.613385 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qvx95\" (UniqueName: \"kubernetes.io/projected/a3a795fb-ebbd-463c-8aae-317aafd133f8-kube-api-access-qvx95\") pod \"nmstate-operator-646758c888-wps6n\" (UID: \"a3a795fb-ebbd-463c-8aae-317aafd133f8\") " pod="openshift-nmstate/nmstate-operator-646758c888-wps6n" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.710631 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wps6n" Jan 29 16:23:05 crc kubenswrapper[4895]: I0129 16:23:05.974246 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wps6n"] Jan 29 16:23:06 crc kubenswrapper[4895]: I0129 16:23:06.586144 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-wps6n" event={"ID":"a3a795fb-ebbd-463c-8aae-317aafd133f8","Type":"ContainerStarted","Data":"c4b966f5afdaaa53473fd65b0bbf4a951cd3dd12cec6290c27b8165bfa27a243"} Jan 29 16:23:09 crc kubenswrapper[4895]: I0129 16:23:09.607723 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-wps6n" event={"ID":"a3a795fb-ebbd-463c-8aae-317aafd133f8","Type":"ContainerStarted","Data":"534e079f76e0a6af58ac2966bd165105eb611c7814456c407f02c1790bdf6d2f"} Jan 29 16:23:09 crc kubenswrapper[4895]: I0129 16:23:09.626321 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-wps6n" podStartSLOduration=2.075985323 podStartE2EDuration="4.626291249s" podCreationTimestamp="2026-01-29 16:23:05 +0000 UTC" firstStartedPulling="2026-01-29 16:23:05.973955048 +0000 UTC m=+669.776932312" lastFinishedPulling="2026-01-29 16:23:08.524260974 +0000 UTC m=+672.327238238" observedRunningTime="2026-01-29 16:23:09.625016785 +0000 UTC m=+673.427994099" watchObservedRunningTime="2026-01-29 16:23:09.626291249 +0000 UTC m=+673.429268523" Jan 29 16:23:13 crc 
kubenswrapper[4895]: I0129 16:23:13.752727 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-w28gt"] Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.754163 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-w28gt" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.757245 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-zfmhk" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.767161 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr"] Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.768019 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.772104 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.794258 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-59nh2"] Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.795140 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.809777 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-w28gt"] Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.820386 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqfxf\" (UniqueName: \"kubernetes.io/projected/c943c1d4-650a-46b7-805e-d89160518569-kube-api-access-cqfxf\") pod \"nmstate-webhook-8474b5b9d8-757pr\" (UID: \"c943c1d4-650a-46b7-805e-d89160518569\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.820482 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c943c1d4-650a-46b7-805e-d89160518569-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-757pr\" (UID: \"c943c1d4-650a-46b7-805e-d89160518569\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.820791 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7g5j\" (UniqueName: \"kubernetes.io/projected/d1ce6e0d-a5d7-4d97-b029-9580c624ff4c-kube-api-access-l7g5j\") pod \"nmstate-handler-59nh2\" (UID: \"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c\") " pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.820919 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d1ce6e0d-a5d7-4d97-b029-9580c624ff4c-ovs-socket\") pod \"nmstate-handler-59nh2\" (UID: \"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c\") " pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.820973 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d1ce6e0d-a5d7-4d97-b029-9580c624ff4c-nmstate-lock\") pod \"nmstate-handler-59nh2\" (UID: \"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c\") " pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.821035 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f69z\" (UniqueName: \"kubernetes.io/projected/48a117fe-e939-4f89-8c46-c6b16e209948-kube-api-access-4f69z\") pod \"nmstate-metrics-54757c584b-w28gt\" (UID: \"48a117fe-e939-4f89-8c46-c6b16e209948\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-w28gt" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.821113 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d1ce6e0d-a5d7-4d97-b029-9580c624ff4c-dbus-socket\") pod \"nmstate-handler-59nh2\" (UID: \"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c\") " pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.821466 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr"] Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.923342 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f69z\" (UniqueName: \"kubernetes.io/projected/48a117fe-e939-4f89-8c46-c6b16e209948-kube-api-access-4f69z\") pod \"nmstate-metrics-54757c584b-w28gt\" (UID: \"48a117fe-e939-4f89-8c46-c6b16e209948\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-w28gt" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.923442 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/d1ce6e0d-a5d7-4d97-b029-9580c624ff4c-dbus-socket\") pod \"nmstate-handler-59nh2\" (UID: \"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c\") " pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.923500 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqfxf\" (UniqueName: \"kubernetes.io/projected/c943c1d4-650a-46b7-805e-d89160518569-kube-api-access-cqfxf\") pod \"nmstate-webhook-8474b5b9d8-757pr\" (UID: \"c943c1d4-650a-46b7-805e-d89160518569\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.923540 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c943c1d4-650a-46b7-805e-d89160518569-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-757pr\" (UID: \"c943c1d4-650a-46b7-805e-d89160518569\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.923721 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7g5j\" (UniqueName: \"kubernetes.io/projected/d1ce6e0d-a5d7-4d97-b029-9580c624ff4c-kube-api-access-l7g5j\") pod \"nmstate-handler-59nh2\" (UID: \"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c\") " pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.923778 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d1ce6e0d-a5d7-4d97-b029-9580c624ff4c-ovs-socket\") pod \"nmstate-handler-59nh2\" (UID: \"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c\") " pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: E0129 16:23:13.923818 4895 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 29 
16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.923897 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d1ce6e0d-a5d7-4d97-b029-9580c624ff4c-nmstate-lock\") pod \"nmstate-handler-59nh2\" (UID: \"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c\") " pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.923918 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d1ce6e0d-a5d7-4d97-b029-9580c624ff4c-ovs-socket\") pod \"nmstate-handler-59nh2\" (UID: \"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c\") " pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: E0129 16:23:13.923942 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c943c1d4-650a-46b7-805e-d89160518569-tls-key-pair podName:c943c1d4-650a-46b7-805e-d89160518569 nodeName:}" failed. No retries permitted until 2026-01-29 16:23:14.423915486 +0000 UTC m=+678.226892930 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c943c1d4-650a-46b7-805e-d89160518569-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-757pr" (UID: "c943c1d4-650a-46b7-805e-d89160518569") : secret "openshift-nmstate-webhook" not found Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.923824 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d1ce6e0d-a5d7-4d97-b029-9580c624ff4c-nmstate-lock\") pod \"nmstate-handler-59nh2\" (UID: \"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c\") " pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.924143 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d1ce6e0d-a5d7-4d97-b029-9580c624ff4c-dbus-socket\") pod \"nmstate-handler-59nh2\" (UID: \"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c\") " pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.957732 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7g5j\" (UniqueName: \"kubernetes.io/projected/d1ce6e0d-a5d7-4d97-b029-9580c624ff4c-kube-api-access-l7g5j\") pod \"nmstate-handler-59nh2\" (UID: \"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c\") " pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.958178 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f69z\" (UniqueName: \"kubernetes.io/projected/48a117fe-e939-4f89-8c46-c6b16e209948-kube-api-access-4f69z\") pod \"nmstate-metrics-54757c584b-w28gt\" (UID: \"48a117fe-e939-4f89-8c46-c6b16e209948\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-w28gt" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.959491 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs"] Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.960696 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.963993 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.964375 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-z8fmx" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.964570 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.966484 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqfxf\" (UniqueName: \"kubernetes.io/projected/c943c1d4-650a-46b7-805e-d89160518569-kube-api-access-cqfxf\") pod \"nmstate-webhook-8474b5b9d8-757pr\" (UID: \"c943c1d4-650a-46b7-805e-d89160518569\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" Jan 29 16:23:13 crc kubenswrapper[4895]: I0129 16:23:13.985655 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs"] Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.025524 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vxzj\" (UniqueName: \"kubernetes.io/projected/adf5441b-6337-49f1-992c-00ae9c9180b4-kube-api-access-8vxzj\") pod \"nmstate-console-plugin-7754f76f8b-cl5vs\" (UID: \"adf5441b-6337-49f1-992c-00ae9c9180b4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.025637 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/adf5441b-6337-49f1-992c-00ae9c9180b4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cl5vs\" (UID: \"adf5441b-6337-49f1-992c-00ae9c9180b4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.025713 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/adf5441b-6337-49f1-992c-00ae9c9180b4-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cl5vs\" (UID: \"adf5441b-6337-49f1-992c-00ae9c9180b4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.106558 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-w28gt" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.123193 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.126760 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/adf5441b-6337-49f1-992c-00ae9c9180b4-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cl5vs\" (UID: \"adf5441b-6337-49f1-992c-00ae9c9180b4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.126951 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vxzj\" (UniqueName: \"kubernetes.io/projected/adf5441b-6337-49f1-992c-00ae9c9180b4-kube-api-access-8vxzj\") pod \"nmstate-console-plugin-7754f76f8b-cl5vs\" (UID: \"adf5441b-6337-49f1-992c-00ae9c9180b4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.127029 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/adf5441b-6337-49f1-992c-00ae9c9180b4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cl5vs\" (UID: \"adf5441b-6337-49f1-992c-00ae9c9180b4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.128425 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/adf5441b-6337-49f1-992c-00ae9c9180b4-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cl5vs\" (UID: \"adf5441b-6337-49f1-992c-00ae9c9180b4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.128969 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b5847d797-n5jvw"] Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.129746 4895 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.134532 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/adf5441b-6337-49f1-992c-00ae9c9180b4-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cl5vs\" (UID: \"adf5441b-6337-49f1-992c-00ae9c9180b4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.161615 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vxzj\" (UniqueName: \"kubernetes.io/projected/adf5441b-6337-49f1-992c-00ae9c9180b4-kube-api-access-8vxzj\") pod \"nmstate-console-plugin-7754f76f8b-cl5vs\" (UID: \"adf5441b-6337-49f1-992c-00ae9c9180b4\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.165334 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b5847d797-n5jvw"] Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.230684 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7beb30e9-2c06-478a-b618-8a9ce631e7f8-trusted-ca-bundle\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.231029 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7beb30e9-2c06-478a-b618-8a9ce631e7f8-console-serving-cert\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 
16:23:14.231116 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sssl\" (UniqueName: \"kubernetes.io/projected/7beb30e9-2c06-478a-b618-8a9ce631e7f8-kube-api-access-4sssl\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.231230 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7beb30e9-2c06-478a-b618-8a9ce631e7f8-oauth-serving-cert\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.231323 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7beb30e9-2c06-478a-b618-8a9ce631e7f8-console-config\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.231398 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7beb30e9-2c06-478a-b618-8a9ce631e7f8-console-oauth-config\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.231485 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7beb30e9-2c06-478a-b618-8a9ce631e7f8-service-ca\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " 
pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.308275 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.332426 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7beb30e9-2c06-478a-b618-8a9ce631e7f8-console-config\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.332473 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7beb30e9-2c06-478a-b618-8a9ce631e7f8-console-oauth-config\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.332517 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7beb30e9-2c06-478a-b618-8a9ce631e7f8-service-ca\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.332554 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7beb30e9-2c06-478a-b618-8a9ce631e7f8-trusted-ca-bundle\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.332570 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7beb30e9-2c06-478a-b618-8a9ce631e7f8-console-serving-cert\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.332591 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sssl\" (UniqueName: \"kubernetes.io/projected/7beb30e9-2c06-478a-b618-8a9ce631e7f8-kube-api-access-4sssl\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.332639 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7beb30e9-2c06-478a-b618-8a9ce631e7f8-oauth-serving-cert\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.333993 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7beb30e9-2c06-478a-b618-8a9ce631e7f8-oauth-serving-cert\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.334045 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7beb30e9-2c06-478a-b618-8a9ce631e7f8-service-ca\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.334318 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/7beb30e9-2c06-478a-b618-8a9ce631e7f8-console-config\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.334537 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7beb30e9-2c06-478a-b618-8a9ce631e7f8-trusted-ca-bundle\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.340203 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7beb30e9-2c06-478a-b618-8a9ce631e7f8-console-serving-cert\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.346488 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7beb30e9-2c06-478a-b618-8a9ce631e7f8-console-oauth-config\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.357649 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sssl\" (UniqueName: \"kubernetes.io/projected/7beb30e9-2c06-478a-b618-8a9ce631e7f8-kube-api-access-4sssl\") pod \"console-6b5847d797-n5jvw\" (UID: \"7beb30e9-2c06-478a-b618-8a9ce631e7f8\") " pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.434961 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/c943c1d4-650a-46b7-805e-d89160518569-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-757pr\" (UID: \"c943c1d4-650a-46b7-805e-d89160518569\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.439608 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c943c1d4-650a-46b7-805e-d89160518569-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-757pr\" (UID: \"c943c1d4-650a-46b7-805e-d89160518569\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.511319 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.515910 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs"] Jan 29 16:23:14 crc kubenswrapper[4895]: W0129 16:23:14.533407 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadf5441b_6337_49f1_992c_00ae9c9180b4.slice/crio-47a0acdeb780a4afc187aede7ddc8e5dd115a7c4d361b8db701340c40f7fc39d WatchSource:0}: Error finding container 47a0acdeb780a4afc187aede7ddc8e5dd115a7c4d361b8db701340c40f7fc39d: Status 404 returned error can't find the container with id 47a0acdeb780a4afc187aede7ddc8e5dd115a7c4d361b8db701340c40f7fc39d Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.647018 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-59nh2" event={"ID":"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c","Type":"ContainerStarted","Data":"7bb234d46a0e41fa1a14bf0c817f30ea9ef5a44ffea1ffe58cf2cb0a510e1939"} Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.648955 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" event={"ID":"adf5441b-6337-49f1-992c-00ae9c9180b4","Type":"ContainerStarted","Data":"47a0acdeb780a4afc187aede7ddc8e5dd115a7c4d361b8db701340c40f7fc39d"} Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.682248 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-w28gt"] Jan 29 16:23:14 crc kubenswrapper[4895]: W0129 16:23:14.690284 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48a117fe_e939_4f89_8c46_c6b16e209948.slice/crio-191ac570bc1d26d8f6bd641fbff2053e8ebb86e828c53832f329779b9d2b0e74 WatchSource:0}: Error finding container 191ac570bc1d26d8f6bd641fbff2053e8ebb86e828c53832f329779b9d2b0e74: Status 404 returned error can't find the container with id 191ac570bc1d26d8f6bd641fbff2053e8ebb86e828c53832f329779b9d2b0e74 Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.714623 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" Jan 29 16:23:14 crc kubenswrapper[4895]: I0129 16:23:14.910285 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b5847d797-n5jvw"] Jan 29 16:23:15 crc kubenswrapper[4895]: I0129 16:23:15.022206 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr"] Jan 29 16:23:15 crc kubenswrapper[4895]: W0129 16:23:15.033944 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc943c1d4_650a_46b7_805e_d89160518569.slice/crio-b13d59d990bfc3e5b7de3f6b08bb50e2d0374d2ae92b9bdb0fd9840cf4323e1a WatchSource:0}: Error finding container b13d59d990bfc3e5b7de3f6b08bb50e2d0374d2ae92b9bdb0fd9840cf4323e1a: Status 404 returned error can't find the container with id b13d59d990bfc3e5b7de3f6b08bb50e2d0374d2ae92b9bdb0fd9840cf4323e1a Jan 29 16:23:15 crc kubenswrapper[4895]: I0129 16:23:15.657920 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" event={"ID":"c943c1d4-650a-46b7-805e-d89160518569","Type":"ContainerStarted","Data":"b13d59d990bfc3e5b7de3f6b08bb50e2d0374d2ae92b9bdb0fd9840cf4323e1a"} Jan 29 16:23:15 crc kubenswrapper[4895]: I0129 16:23:15.660249 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b5847d797-n5jvw" event={"ID":"7beb30e9-2c06-478a-b618-8a9ce631e7f8","Type":"ContainerStarted","Data":"18ae4f40bece62bc8cafd82322d41cb37e27fe9e66cfdb513443e6418b3588f6"} Jan 29 16:23:15 crc kubenswrapper[4895]: I0129 16:23:15.660397 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b5847d797-n5jvw" event={"ID":"7beb30e9-2c06-478a-b618-8a9ce631e7f8","Type":"ContainerStarted","Data":"6bc0763e4bb4409d251f7514411dcec32a2578a70a6e39721213be38bc403944"} Jan 29 16:23:15 crc kubenswrapper[4895]: I0129 16:23:15.661739 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-w28gt" event={"ID":"48a117fe-e939-4f89-8c46-c6b16e209948","Type":"ContainerStarted","Data":"191ac570bc1d26d8f6bd641fbff2053e8ebb86e828c53832f329779b9d2b0e74"} Jan 29 16:23:15 crc kubenswrapper[4895]: I0129 16:23:15.701251 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b5847d797-n5jvw" podStartSLOduration=1.701226852 podStartE2EDuration="1.701226852s" podCreationTimestamp="2026-01-29 16:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:23:15.696623303 +0000 UTC m=+679.499600577" watchObservedRunningTime="2026-01-29 16:23:15.701226852 +0000 UTC m=+679.504204116" Jan 29 16:23:18 crc kubenswrapper[4895]: I0129 16:23:18.684415 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" event={"ID":"c943c1d4-650a-46b7-805e-d89160518569","Type":"ContainerStarted","Data":"1e739e7beff37168b2e51f9a0ce2cf7ae6487f0bcb207ed650e3e705aaf9cad1"} Jan 29 16:23:18 crc kubenswrapper[4895]: I0129 16:23:18.685127 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" Jan 29 16:23:18 crc kubenswrapper[4895]: I0129 16:23:18.686591 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-59nh2" event={"ID":"d1ce6e0d-a5d7-4d97-b029-9580c624ff4c","Type":"ContainerStarted","Data":"9f16717d8eba543abaec396cd38c0b8b43b1daade95e291ed5aab9dfadf22672"} Jan 29 16:23:18 crc kubenswrapper[4895]: I0129 16:23:18.686784 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:18 crc kubenswrapper[4895]: I0129 16:23:18.688558 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" event={"ID":"adf5441b-6337-49f1-992c-00ae9c9180b4","Type":"ContainerStarted","Data":"fc3fca0daad50cbb3906588e2aec5479e71069f72c57129e07f379c3e55e5a46"} Jan 29 16:23:18 crc kubenswrapper[4895]: I0129 16:23:18.690682 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-w28gt" event={"ID":"48a117fe-e939-4f89-8c46-c6b16e209948","Type":"ContainerStarted","Data":"27b79748a5d295406059fc9aca8a95d2bd910d73446412b2aeef8cefb5ca9af5"} Jan 29 16:23:18 crc kubenswrapper[4895]: I0129 16:23:18.708778 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" podStartSLOduration=3.070225649 podStartE2EDuration="5.708753849s" podCreationTimestamp="2026-01-29 16:23:13 +0000 UTC" firstStartedPulling="2026-01-29 16:23:15.037052237 +0000 UTC m=+678.840029501" lastFinishedPulling="2026-01-29 16:23:17.675580397 +0000 UTC m=+681.478557701" observedRunningTime="2026-01-29 16:23:18.706163502 +0000 UTC m=+682.509140776" watchObservedRunningTime="2026-01-29 16:23:18.708753849 +0000 UTC m=+682.511731143" Jan 29 16:23:18 crc kubenswrapper[4895]: I0129 16:23:18.732115 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-59nh2" podStartSLOduration=2.267955522 podStartE2EDuration="5.732087533s" podCreationTimestamp="2026-01-29 16:23:13 +0000 UTC" firstStartedPulling="2026-01-29 16:23:14.211710973 +0000 UTC m=+678.014688237" lastFinishedPulling="2026-01-29 16:23:17.675842984 +0000 UTC m=+681.478820248" observedRunningTime="2026-01-29 16:23:18.730998425 +0000 UTC m=+682.533975709" watchObservedRunningTime="2026-01-29 16:23:18.732087533 +0000 UTC m=+682.535064807" Jan 29 16:23:18 crc kubenswrapper[4895]: I0129 16:23:18.751620 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cl5vs" 
podStartSLOduration=2.62309883 podStartE2EDuration="5.751584387s" podCreationTimestamp="2026-01-29 16:23:13 +0000 UTC" firstStartedPulling="2026-01-29 16:23:14.536239079 +0000 UTC m=+678.339216343" lastFinishedPulling="2026-01-29 16:23:17.664724636 +0000 UTC m=+681.467701900" observedRunningTime="2026-01-29 16:23:18.747145423 +0000 UTC m=+682.550122717" watchObservedRunningTime="2026-01-29 16:23:18.751584387 +0000 UTC m=+682.554561651" Jan 29 16:23:21 crc kubenswrapper[4895]: I0129 16:23:21.714687 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-w28gt" event={"ID":"48a117fe-e939-4f89-8c46-c6b16e209948","Type":"ContainerStarted","Data":"d078a0eb292c880bc01cf75a2a730e6f6573a2c895d7f00c45425de408cab2fc"} Jan 29 16:23:21 crc kubenswrapper[4895]: I0129 16:23:21.736644 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-w28gt" podStartSLOduration=2.357276472 podStartE2EDuration="8.736617763s" podCreationTimestamp="2026-01-29 16:23:13 +0000 UTC" firstStartedPulling="2026-01-29 16:23:14.693064136 +0000 UTC m=+678.496041400" lastFinishedPulling="2026-01-29 16:23:21.072405397 +0000 UTC m=+684.875382691" observedRunningTime="2026-01-29 16:23:21.734741214 +0000 UTC m=+685.537718498" watchObservedRunningTime="2026-01-29 16:23:21.736617763 +0000 UTC m=+685.539595037" Jan 29 16:23:24 crc kubenswrapper[4895]: I0129 16:23:24.156271 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-59nh2" Jan 29 16:23:24 crc kubenswrapper[4895]: I0129 16:23:24.512222 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:24 crc kubenswrapper[4895]: I0129 16:23:24.512639 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:24 crc kubenswrapper[4895]: I0129 
16:23:24.516949 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:24 crc kubenswrapper[4895]: I0129 16:23:24.742356 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b5847d797-n5jvw" Jan 29 16:23:24 crc kubenswrapper[4895]: I0129 16:23:24.799109 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-p6ck2"] Jan 29 16:23:34 crc kubenswrapper[4895]: I0129 16:23:34.725242 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-757pr" Jan 29 16:23:47 crc kubenswrapper[4895]: I0129 16:23:47.925029 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm"] Jan 29 16:23:47 crc kubenswrapper[4895]: I0129 16:23:47.928911 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:23:47 crc kubenswrapper[4895]: I0129 16:23:47.933337 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 16:23:47 crc kubenswrapper[4895]: I0129 16:23:47.942311 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm"] Jan 29 16:23:47 crc kubenswrapper[4895]: I0129 16:23:47.990974 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz68s\" (UniqueName: \"kubernetes.io/projected/498f46aa-3aec-486a-99bc-585c811a12c6-kube-api-access-fz68s\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm\" (UID: \"498f46aa-3aec-486a-99bc-585c811a12c6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:23:47 crc kubenswrapper[4895]: I0129 16:23:47.991039 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/498f46aa-3aec-486a-99bc-585c811a12c6-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm\" (UID: \"498f46aa-3aec-486a-99bc-585c811a12c6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:23:47 crc kubenswrapper[4895]: I0129 16:23:47.991190 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/498f46aa-3aec-486a-99bc-585c811a12c6-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm\" (UID: \"498f46aa-3aec-486a-99bc-585c811a12c6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:23:48 crc kubenswrapper[4895]: 
I0129 16:23:48.092993 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz68s\" (UniqueName: \"kubernetes.io/projected/498f46aa-3aec-486a-99bc-585c811a12c6-kube-api-access-fz68s\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm\" (UID: \"498f46aa-3aec-486a-99bc-585c811a12c6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:23:48 crc kubenswrapper[4895]: I0129 16:23:48.093053 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/498f46aa-3aec-486a-99bc-585c811a12c6-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm\" (UID: \"498f46aa-3aec-486a-99bc-585c811a12c6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:23:48 crc kubenswrapper[4895]: I0129 16:23:48.093131 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/498f46aa-3aec-486a-99bc-585c811a12c6-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm\" (UID: \"498f46aa-3aec-486a-99bc-585c811a12c6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:23:48 crc kubenswrapper[4895]: I0129 16:23:48.093709 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/498f46aa-3aec-486a-99bc-585c811a12c6-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm\" (UID: \"498f46aa-3aec-486a-99bc-585c811a12c6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:23:48 crc kubenswrapper[4895]: I0129 16:23:48.093990 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/498f46aa-3aec-486a-99bc-585c811a12c6-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm\" (UID: \"498f46aa-3aec-486a-99bc-585c811a12c6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:23:48 crc kubenswrapper[4895]: I0129 16:23:48.122640 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz68s\" (UniqueName: \"kubernetes.io/projected/498f46aa-3aec-486a-99bc-585c811a12c6-kube-api-access-fz68s\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm\" (UID: \"498f46aa-3aec-486a-99bc-585c811a12c6\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:23:48 crc kubenswrapper[4895]: I0129 16:23:48.254036 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:23:48 crc kubenswrapper[4895]: I0129 16:23:48.680234 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm"] Jan 29 16:23:48 crc kubenswrapper[4895]: I0129 16:23:48.898298 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" event={"ID":"498f46aa-3aec-486a-99bc-585c811a12c6","Type":"ContainerStarted","Data":"ceb2a61b8a25b3f5ea39de97da8f97ea7c416774655743b1a3672d8417967452"} Jan 29 16:23:48 crc kubenswrapper[4895]: I0129 16:23:48.898347 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" event={"ID":"498f46aa-3aec-486a-99bc-585c811a12c6","Type":"ContainerStarted","Data":"5a03e0178270219b5a181bcc98613480a507d4dc46744a80a9d9b010eed0f3d6"} Jan 29 16:23:49 crc kubenswrapper[4895]: I0129 16:23:49.849657 4895 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-p6ck2" podUID="6dd34441-4294-4e90-9f2d-909c5aecdff7" containerName="console" containerID="cri-o://13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30" gracePeriod=15 Jan 29 16:23:49 crc kubenswrapper[4895]: I0129 16:23:49.907708 4895 generic.go:334] "Generic (PLEG): container finished" podID="498f46aa-3aec-486a-99bc-585c811a12c6" containerID="ceb2a61b8a25b3f5ea39de97da8f97ea7c416774655743b1a3672d8417967452" exitCode=0 Jan 29 16:23:49 crc kubenswrapper[4895]: I0129 16:23:49.907789 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" event={"ID":"498f46aa-3aec-486a-99bc-585c811a12c6","Type":"ContainerDied","Data":"ceb2a61b8a25b3f5ea39de97da8f97ea7c416774655743b1a3672d8417967452"} Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.182706 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-p6ck2_6dd34441-4294-4e90-9f2d-909c5aecdff7/console/0.log" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.182778 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.226046 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nclcx\" (UniqueName: \"kubernetes.io/projected/6dd34441-4294-4e90-9f2d-909c5aecdff7-kube-api-access-nclcx\") pod \"6dd34441-4294-4e90-9f2d-909c5aecdff7\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.226126 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-oauth-config\") pod \"6dd34441-4294-4e90-9f2d-909c5aecdff7\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.226182 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-service-ca\") pod \"6dd34441-4294-4e90-9f2d-909c5aecdff7\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.227097 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-trusted-ca-bundle\") pod \"6dd34441-4294-4e90-9f2d-909c5aecdff7\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.227203 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-config\") pod \"6dd34441-4294-4e90-9f2d-909c5aecdff7\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.227285 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-oauth-serving-cert\") pod \"6dd34441-4294-4e90-9f2d-909c5aecdff7\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.227344 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-serving-cert\") pod \"6dd34441-4294-4e90-9f2d-909c5aecdff7\" (UID: \"6dd34441-4294-4e90-9f2d-909c5aecdff7\") " Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.227280 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-service-ca" (OuterVolumeSpecName: "service-ca") pod "6dd34441-4294-4e90-9f2d-909c5aecdff7" (UID: "6dd34441-4294-4e90-9f2d-909c5aecdff7"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.227752 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-config" (OuterVolumeSpecName: "console-config") pod "6dd34441-4294-4e90-9f2d-909c5aecdff7" (UID: "6dd34441-4294-4e90-9f2d-909c5aecdff7"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.228032 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6dd34441-4294-4e90-9f2d-909c5aecdff7" (UID: "6dd34441-4294-4e90-9f2d-909c5aecdff7"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.228115 4895 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.228138 4895 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.228245 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6dd34441-4294-4e90-9f2d-909c5aecdff7" (UID: "6dd34441-4294-4e90-9f2d-909c5aecdff7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.232920 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6dd34441-4294-4e90-9f2d-909c5aecdff7" (UID: "6dd34441-4294-4e90-9f2d-909c5aecdff7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.234106 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6dd34441-4294-4e90-9f2d-909c5aecdff7" (UID: "6dd34441-4294-4e90-9f2d-909c5aecdff7"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.236242 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd34441-4294-4e90-9f2d-909c5aecdff7-kube-api-access-nclcx" (OuterVolumeSpecName: "kube-api-access-nclcx") pod "6dd34441-4294-4e90-9f2d-909c5aecdff7" (UID: "6dd34441-4294-4e90-9f2d-909c5aecdff7"). InnerVolumeSpecName "kube-api-access-nclcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.330209 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nclcx\" (UniqueName: \"kubernetes.io/projected/6dd34441-4294-4e90-9f2d-909c5aecdff7-kube-api-access-nclcx\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.330253 4895 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.330269 4895 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.330281 4895 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6dd34441-4294-4e90-9f2d-909c5aecdff7-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.330295 4895 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6dd34441-4294-4e90-9f2d-909c5aecdff7-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.918397 4895 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-console_console-f9d7485db-p6ck2_6dd34441-4294-4e90-9f2d-909c5aecdff7/console/0.log" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.918479 4895 generic.go:334] "Generic (PLEG): container finished" podID="6dd34441-4294-4e90-9f2d-909c5aecdff7" containerID="13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30" exitCode=2 Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.918522 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p6ck2" event={"ID":"6dd34441-4294-4e90-9f2d-909c5aecdff7","Type":"ContainerDied","Data":"13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30"} Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.918561 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p6ck2" event={"ID":"6dd34441-4294-4e90-9f2d-909c5aecdff7","Type":"ContainerDied","Data":"3b8ca7f5945c3d5a02566f18377b06f53416b7a8d3ca876bdae842e5f8caf23a"} Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.918590 4895 scope.go:117] "RemoveContainer" containerID="13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.918607 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-p6ck2" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.949743 4895 scope.go:117] "RemoveContainer" containerID="13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30" Jan 29 16:23:50 crc kubenswrapper[4895]: E0129 16:23:50.951056 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30\": container with ID starting with 13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30 not found: ID does not exist" containerID="13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.951100 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30"} err="failed to get container status \"13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30\": rpc error: code = NotFound desc = could not find container \"13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30\": container with ID starting with 13912a6c6a1c677d6fb63a608e64c8a14528cef4a54bae6443d81158b102de30 not found: ID does not exist" Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.961798 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-p6ck2"] Jan 29 16:23:50 crc kubenswrapper[4895]: I0129 16:23:50.969558 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-p6ck2"] Jan 29 16:23:51 crc kubenswrapper[4895]: I0129 16:23:51.050664 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd34441-4294-4e90-9f2d-909c5aecdff7" path="/var/lib/kubelet/pods/6dd34441-4294-4e90-9f2d-909c5aecdff7/volumes" Jan 29 16:23:51 crc kubenswrapper[4895]: I0129 16:23:51.931674 4895 generic.go:334] "Generic (PLEG): 
container finished" podID="498f46aa-3aec-486a-99bc-585c811a12c6" containerID="0831d59409919514cb469547d730144592c4a541dc5ab28693eadff39cbb9185" exitCode=0 Jan 29 16:23:51 crc kubenswrapper[4895]: I0129 16:23:51.931749 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" event={"ID":"498f46aa-3aec-486a-99bc-585c811a12c6","Type":"ContainerDied","Data":"0831d59409919514cb469547d730144592c4a541dc5ab28693eadff39cbb9185"} Jan 29 16:23:52 crc kubenswrapper[4895]: I0129 16:23:52.943296 4895 generic.go:334] "Generic (PLEG): container finished" podID="498f46aa-3aec-486a-99bc-585c811a12c6" containerID="8dc7775a394dc215c70a7df4579e98079fb784f55a602e5bcf6818070d23abdb" exitCode=0 Jan 29 16:23:52 crc kubenswrapper[4895]: I0129 16:23:52.943416 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" event={"ID":"498f46aa-3aec-486a-99bc-585c811a12c6","Type":"ContainerDied","Data":"8dc7775a394dc215c70a7df4579e98079fb784f55a602e5bcf6818070d23abdb"} Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.267167 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.296012 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz68s\" (UniqueName: \"kubernetes.io/projected/498f46aa-3aec-486a-99bc-585c811a12c6-kube-api-access-fz68s\") pod \"498f46aa-3aec-486a-99bc-585c811a12c6\" (UID: \"498f46aa-3aec-486a-99bc-585c811a12c6\") " Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.296087 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/498f46aa-3aec-486a-99bc-585c811a12c6-util\") pod \"498f46aa-3aec-486a-99bc-585c811a12c6\" (UID: \"498f46aa-3aec-486a-99bc-585c811a12c6\") " Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.296153 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/498f46aa-3aec-486a-99bc-585c811a12c6-bundle\") pod \"498f46aa-3aec-486a-99bc-585c811a12c6\" (UID: \"498f46aa-3aec-486a-99bc-585c811a12c6\") " Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.300137 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498f46aa-3aec-486a-99bc-585c811a12c6-bundle" (OuterVolumeSpecName: "bundle") pod "498f46aa-3aec-486a-99bc-585c811a12c6" (UID: "498f46aa-3aec-486a-99bc-585c811a12c6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.309619 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498f46aa-3aec-486a-99bc-585c811a12c6-kube-api-access-fz68s" (OuterVolumeSpecName: "kube-api-access-fz68s") pod "498f46aa-3aec-486a-99bc-585c811a12c6" (UID: "498f46aa-3aec-486a-99bc-585c811a12c6"). InnerVolumeSpecName "kube-api-access-fz68s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.397370 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz68s\" (UniqueName: \"kubernetes.io/projected/498f46aa-3aec-486a-99bc-585c811a12c6-kube-api-access-fz68s\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.397430 4895 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/498f46aa-3aec-486a-99bc-585c811a12c6-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.655356 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498f46aa-3aec-486a-99bc-585c811a12c6-util" (OuterVolumeSpecName: "util") pod "498f46aa-3aec-486a-99bc-585c811a12c6" (UID: "498f46aa-3aec-486a-99bc-585c811a12c6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.701976 4895 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/498f46aa-3aec-486a-99bc-585c811a12c6-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.965860 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" event={"ID":"498f46aa-3aec-486a-99bc-585c811a12c6","Type":"ContainerDied","Data":"5a03e0178270219b5a181bcc98613480a507d4dc46744a80a9d9b010eed0f3d6"} Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.965977 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a03e0178270219b5a181bcc98613480a507d4dc46744a80a9d9b010eed0f3d6" Jan 29 16:23:54 crc kubenswrapper[4895]: I0129 16:23:54.965999 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.325657 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6"] Jan 29 16:24:03 crc kubenswrapper[4895]: E0129 16:24:03.328265 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498f46aa-3aec-486a-99bc-585c811a12c6" containerName="util" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.328365 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="498f46aa-3aec-486a-99bc-585c811a12c6" containerName="util" Jan 29 16:24:03 crc kubenswrapper[4895]: E0129 16:24:03.328476 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498f46aa-3aec-486a-99bc-585c811a12c6" containerName="pull" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.328544 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="498f46aa-3aec-486a-99bc-585c811a12c6" containerName="pull" Jan 29 16:24:03 crc kubenswrapper[4895]: E0129 16:24:03.328596 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498f46aa-3aec-486a-99bc-585c811a12c6" containerName="extract" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.328644 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="498f46aa-3aec-486a-99bc-585c811a12c6" containerName="extract" Jan 29 16:24:03 crc kubenswrapper[4895]: E0129 16:24:03.328707 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd34441-4294-4e90-9f2d-909c5aecdff7" containerName="console" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.328757 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd34441-4294-4e90-9f2d-909c5aecdff7" containerName="console" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.328960 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd34441-4294-4e90-9f2d-909c5aecdff7" 
containerName="console" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.329050 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="498f46aa-3aec-486a-99bc-585c811a12c6" containerName="extract" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.329684 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.332265 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.332513 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-n69w9" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.332723 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.332771 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.336100 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.359535 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6"] Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.377800 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmq5d\" (UniqueName: \"kubernetes.io/projected/2563d162-e755-47fe-9b15-4975ece29fb2-kube-api-access-mmq5d\") pod \"metallb-operator-controller-manager-b596b8569-nv8c6\" (UID: \"2563d162-e755-47fe-9b15-4975ece29fb2\") " 
pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.377919 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2563d162-e755-47fe-9b15-4975ece29fb2-webhook-cert\") pod \"metallb-operator-controller-manager-b596b8569-nv8c6\" (UID: \"2563d162-e755-47fe-9b15-4975ece29fb2\") " pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.378005 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2563d162-e755-47fe-9b15-4975ece29fb2-apiservice-cert\") pod \"metallb-operator-controller-manager-b596b8569-nv8c6\" (UID: \"2563d162-e755-47fe-9b15-4975ece29fb2\") " pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.479272 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2563d162-e755-47fe-9b15-4975ece29fb2-webhook-cert\") pod \"metallb-operator-controller-manager-b596b8569-nv8c6\" (UID: \"2563d162-e755-47fe-9b15-4975ece29fb2\") " pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.479338 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2563d162-e755-47fe-9b15-4975ece29fb2-apiservice-cert\") pod \"metallb-operator-controller-manager-b596b8569-nv8c6\" (UID: \"2563d162-e755-47fe-9b15-4975ece29fb2\") " pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.479413 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mmq5d\" (UniqueName: \"kubernetes.io/projected/2563d162-e755-47fe-9b15-4975ece29fb2-kube-api-access-mmq5d\") pod \"metallb-operator-controller-manager-b596b8569-nv8c6\" (UID: \"2563d162-e755-47fe-9b15-4975ece29fb2\") " pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.486992 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2563d162-e755-47fe-9b15-4975ece29fb2-apiservice-cert\") pod \"metallb-operator-controller-manager-b596b8569-nv8c6\" (UID: \"2563d162-e755-47fe-9b15-4975ece29fb2\") " pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.489696 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2563d162-e755-47fe-9b15-4975ece29fb2-webhook-cert\") pod \"metallb-operator-controller-manager-b596b8569-nv8c6\" (UID: \"2563d162-e755-47fe-9b15-4975ece29fb2\") " pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.508014 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmq5d\" (UniqueName: \"kubernetes.io/projected/2563d162-e755-47fe-9b15-4975ece29fb2-kube-api-access-mmq5d\") pod \"metallb-operator-controller-manager-b596b8569-nv8c6\" (UID: \"2563d162-e755-47fe-9b15-4975ece29fb2\") " pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.649780 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.773846 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx"] Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.781090 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.783403 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.787673 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.787985 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-q86kj" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.810689 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx"] Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.885310 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e900fb6a-e8f0-4fff-8873-a109732c4bc1-webhook-cert\") pod \"metallb-operator-webhook-server-7bfcf9699b-f8jbx\" (UID: \"e900fb6a-e8f0-4fff-8873-a109732c4bc1\") " pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.885406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e900fb6a-e8f0-4fff-8873-a109732c4bc1-apiservice-cert\") pod \"metallb-operator-webhook-server-7bfcf9699b-f8jbx\" 
(UID: \"e900fb6a-e8f0-4fff-8873-a109732c4bc1\") " pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.885495 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h62x\" (UniqueName: \"kubernetes.io/projected/e900fb6a-e8f0-4fff-8873-a109732c4bc1-kube-api-access-6h62x\") pod \"metallb-operator-webhook-server-7bfcf9699b-f8jbx\" (UID: \"e900fb6a-e8f0-4fff-8873-a109732c4bc1\") " pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.932187 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6"] Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.987815 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e900fb6a-e8f0-4fff-8873-a109732c4bc1-webhook-cert\") pod \"metallb-operator-webhook-server-7bfcf9699b-f8jbx\" (UID: \"e900fb6a-e8f0-4fff-8873-a109732c4bc1\") " pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.988072 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e900fb6a-e8f0-4fff-8873-a109732c4bc1-apiservice-cert\") pod \"metallb-operator-webhook-server-7bfcf9699b-f8jbx\" (UID: \"e900fb6a-e8f0-4fff-8873-a109732c4bc1\") " pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.988203 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h62x\" (UniqueName: \"kubernetes.io/projected/e900fb6a-e8f0-4fff-8873-a109732c4bc1-kube-api-access-6h62x\") pod \"metallb-operator-webhook-server-7bfcf9699b-f8jbx\" (UID: 
\"e900fb6a-e8f0-4fff-8873-a109732c4bc1\") " pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:03 crc kubenswrapper[4895]: I0129 16:24:03.997093 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e900fb6a-e8f0-4fff-8873-a109732c4bc1-webhook-cert\") pod \"metallb-operator-webhook-server-7bfcf9699b-f8jbx\" (UID: \"e900fb6a-e8f0-4fff-8873-a109732c4bc1\") " pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:04 crc kubenswrapper[4895]: I0129 16:24:04.000751 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e900fb6a-e8f0-4fff-8873-a109732c4bc1-apiservice-cert\") pod \"metallb-operator-webhook-server-7bfcf9699b-f8jbx\" (UID: \"e900fb6a-e8f0-4fff-8873-a109732c4bc1\") " pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:04 crc kubenswrapper[4895]: I0129 16:24:04.018749 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h62x\" (UniqueName: \"kubernetes.io/projected/e900fb6a-e8f0-4fff-8873-a109732c4bc1-kube-api-access-6h62x\") pod \"metallb-operator-webhook-server-7bfcf9699b-f8jbx\" (UID: \"e900fb6a-e8f0-4fff-8873-a109732c4bc1\") " pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:04 crc kubenswrapper[4895]: I0129 16:24:04.054526 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" event={"ID":"2563d162-e755-47fe-9b15-4975ece29fb2","Type":"ContainerStarted","Data":"db9fa6ca366be8e10490b5e7e97a6e3f9f6312f6b1888eede1cbedf4f2b437ae"} Jan 29 16:24:04 crc kubenswrapper[4895]: I0129 16:24:04.107922 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:04 crc kubenswrapper[4895]: I0129 16:24:04.351627 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx"] Jan 29 16:24:04 crc kubenswrapper[4895]: W0129 16:24:04.362735 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode900fb6a_e8f0_4fff_8873_a109732c4bc1.slice/crio-3b447d51b9730e26fa3841fabf9b5e35c66884b2963e35fe34d02b1aed06d047 WatchSource:0}: Error finding container 3b447d51b9730e26fa3841fabf9b5e35c66884b2963e35fe34d02b1aed06d047: Status 404 returned error can't find the container with id 3b447d51b9730e26fa3841fabf9b5e35c66884b2963e35fe34d02b1aed06d047 Jan 29 16:24:05 crc kubenswrapper[4895]: I0129 16:24:05.063286 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" event={"ID":"e900fb6a-e8f0-4fff-8873-a109732c4bc1","Type":"ContainerStarted","Data":"3b447d51b9730e26fa3841fabf9b5e35c66884b2963e35fe34d02b1aed06d047"} Jan 29 16:24:10 crc kubenswrapper[4895]: I0129 16:24:10.143307 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" event={"ID":"2563d162-e755-47fe-9b15-4975ece29fb2","Type":"ContainerStarted","Data":"1f01aaef6dfea75d7a915ad5741df58850a025fa39d8614981b2362f14554af4"} Jan 29 16:24:10 crc kubenswrapper[4895]: I0129 16:24:10.144239 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:10 crc kubenswrapper[4895]: I0129 16:24:10.145842 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" 
event={"ID":"e900fb6a-e8f0-4fff-8873-a109732c4bc1","Type":"ContainerStarted","Data":"7d9a8da24367c42bdb50bb7cceb4861bb8d170f77cc98a7c5d898633bd262466"} Jan 29 16:24:10 crc kubenswrapper[4895]: I0129 16:24:10.146067 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:10 crc kubenswrapper[4895]: I0129 16:24:10.171124 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" podStartSLOduration=1.6538097550000002 podStartE2EDuration="7.171101289s" podCreationTimestamp="2026-01-29 16:24:03 +0000 UTC" firstStartedPulling="2026-01-29 16:24:03.943994473 +0000 UTC m=+727.746971737" lastFinishedPulling="2026-01-29 16:24:09.461286007 +0000 UTC m=+733.264263271" observedRunningTime="2026-01-29 16:24:10.169107536 +0000 UTC m=+733.972084800" watchObservedRunningTime="2026-01-29 16:24:10.171101289 +0000 UTC m=+733.974078543" Jan 29 16:24:10 crc kubenswrapper[4895]: I0129 16:24:10.205651 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" podStartSLOduration=2.091605283 podStartE2EDuration="7.2056288s" podCreationTimestamp="2026-01-29 16:24:03 +0000 UTC" firstStartedPulling="2026-01-29 16:24:04.3668127 +0000 UTC m=+728.169789964" lastFinishedPulling="2026-01-29 16:24:09.480836217 +0000 UTC m=+733.283813481" observedRunningTime="2026-01-29 16:24:10.203757151 +0000 UTC m=+734.006734445" watchObservedRunningTime="2026-01-29 16:24:10.2056288 +0000 UTC m=+734.008606064" Jan 29 16:24:24 crc kubenswrapper[4895]: I0129 16:24:24.129915 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7bfcf9699b-f8jbx" Jan 29 16:24:33 crc kubenswrapper[4895]: I0129 16:24:33.935117 4895 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 16:24:43 crc kubenswrapper[4895]: I0129 16:24:43.654591 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-b596b8569-nv8c6" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.404313 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-89f9w"] Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.407830 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.409999 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nm2gg" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.410745 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.412396 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.415569 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k"] Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.416329 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.420623 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.434748 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k"] Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.456516 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx6fj\" (UniqueName: \"kubernetes.io/projected/88eaedc5-b046-4629-a360-92edd0bb09e1-kube-api-access-xx6fj\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.456565 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/88eaedc5-b046-4629-a360-92edd0bb09e1-frr-conf\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.456608 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/88eaedc5-b046-4629-a360-92edd0bb09e1-metrics\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.456626 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5sz9\" (UniqueName: \"kubernetes.io/projected/4ba6715b-7048-450e-a391-7f51a11087a2-kube-api-access-q5sz9\") pod \"frr-k8s-webhook-server-7df86c4f6c-4kq2k\" (UID: \"4ba6715b-7048-450e-a391-7f51a11087a2\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.456644 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/88eaedc5-b046-4629-a360-92edd0bb09e1-frr-sockets\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.456659 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/88eaedc5-b046-4629-a360-92edd0bb09e1-frr-startup\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.456687 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ba6715b-7048-450e-a391-7f51a11087a2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4kq2k\" (UID: \"4ba6715b-7048-450e-a391-7f51a11087a2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.456709 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/88eaedc5-b046-4629-a360-92edd0bb09e1-reloader\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.456722 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88eaedc5-b046-4629-a360-92edd0bb09e1-metrics-certs\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc 
kubenswrapper[4895]: I0129 16:24:44.489197 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-bcds8"] Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.490154 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-bcds8" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.492958 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.493169 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.493294 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-t7n4z" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.493465 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.516239 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-54m64"] Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.517359 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.519753 4895 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.535723 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-54m64"] Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.561109 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/88eaedc5-b046-4629-a360-92edd0bb09e1-metrics\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.561167 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5sz9\" (UniqueName: \"kubernetes.io/projected/4ba6715b-7048-450e-a391-7f51a11087a2-kube-api-access-q5sz9\") pod \"frr-k8s-webhook-server-7df86c4f6c-4kq2k\" (UID: \"4ba6715b-7048-450e-a391-7f51a11087a2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.561200 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/88eaedc5-b046-4629-a360-92edd0bb09e1-frr-sockets\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.561222 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/88eaedc5-b046-4629-a360-92edd0bb09e1-frr-startup\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.561256 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ba6715b-7048-450e-a391-7f51a11087a2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4kq2k\" (UID: \"4ba6715b-7048-450e-a391-7f51a11087a2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.561285 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/88eaedc5-b046-4629-a360-92edd0bb09e1-reloader\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.561303 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88eaedc5-b046-4629-a360-92edd0bb09e1-metrics-certs\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.561335 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx6fj\" (UniqueName: \"kubernetes.io/projected/88eaedc5-b046-4629-a360-92edd0bb09e1-kube-api-access-xx6fj\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.561365 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/88eaedc5-b046-4629-a360-92edd0bb09e1-frr-conf\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.561792 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/88eaedc5-b046-4629-a360-92edd0bb09e1-frr-conf\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.562921 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/88eaedc5-b046-4629-a360-92edd0bb09e1-metrics\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.563445 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/88eaedc5-b046-4629-a360-92edd0bb09e1-frr-sockets\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.565325 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/88eaedc5-b046-4629-a360-92edd0bb09e1-reloader\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: E0129 16:24:44.568032 4895 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 29 16:24:44 crc kubenswrapper[4895]: E0129 16:24:44.568173 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88eaedc5-b046-4629-a360-92edd0bb09e1-metrics-certs podName:88eaedc5-b046-4629-a360-92edd0bb09e1 nodeName:}" failed. No retries permitted until 2026-01-29 16:24:45.06814708 +0000 UTC m=+768.871124344 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/88eaedc5-b046-4629-a360-92edd0bb09e1-metrics-certs") pod "frr-k8s-89f9w" (UID: "88eaedc5-b046-4629-a360-92edd0bb09e1") : secret "frr-k8s-certs-secret" not found Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.568410 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/88eaedc5-b046-4629-a360-92edd0bb09e1-frr-startup\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.574925 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4ba6715b-7048-450e-a391-7f51a11087a2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-4kq2k\" (UID: \"4ba6715b-7048-450e-a391-7f51a11087a2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.590585 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5sz9\" (UniqueName: \"kubernetes.io/projected/4ba6715b-7048-450e-a391-7f51a11087a2-kube-api-access-q5sz9\") pod \"frr-k8s-webhook-server-7df86c4f6c-4kq2k\" (UID: \"4ba6715b-7048-450e-a391-7f51a11087a2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.613459 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx6fj\" (UniqueName: \"kubernetes.io/projected/88eaedc5-b046-4629-a360-92edd0bb09e1-kube-api-access-xx6fj\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.662514 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/22149e2b-6b31-4bfe-930d-e14cf24aefb1-cert\") pod \"controller-6968d8fdc4-54m64\" (UID: \"22149e2b-6b31-4bfe-930d-e14cf24aefb1\") " pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.662582 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-467m6\" (UniqueName: \"kubernetes.io/projected/1ca74760-05c3-41f7-aafa-4e20a1021102-kube-api-access-467m6\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.662709 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ca74760-05c3-41f7-aafa-4e20a1021102-metrics-certs\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.662769 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1ca74760-05c3-41f7-aafa-4e20a1021102-metallb-excludel2\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.662800 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4nl6\" (UniqueName: \"kubernetes.io/projected/22149e2b-6b31-4bfe-930d-e14cf24aefb1-kube-api-access-b4nl6\") pod \"controller-6968d8fdc4-54m64\" (UID: \"22149e2b-6b31-4bfe-930d-e14cf24aefb1\") " pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.662905 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/22149e2b-6b31-4bfe-930d-e14cf24aefb1-metrics-certs\") pod \"controller-6968d8fdc4-54m64\" (UID: \"22149e2b-6b31-4bfe-930d-e14cf24aefb1\") " pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.662929 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1ca74760-05c3-41f7-aafa-4e20a1021102-memberlist\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.738390 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.763718 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ca74760-05c3-41f7-aafa-4e20a1021102-metrics-certs\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.764200 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1ca74760-05c3-41f7-aafa-4e20a1021102-metallb-excludel2\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.764230 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4nl6\" (UniqueName: \"kubernetes.io/projected/22149e2b-6b31-4bfe-930d-e14cf24aefb1-kube-api-access-b4nl6\") pod \"controller-6968d8fdc4-54m64\" (UID: \"22149e2b-6b31-4bfe-930d-e14cf24aefb1\") " pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.764307 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22149e2b-6b31-4bfe-930d-e14cf24aefb1-metrics-certs\") pod \"controller-6968d8fdc4-54m64\" (UID: \"22149e2b-6b31-4bfe-930d-e14cf24aefb1\") " pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.764327 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1ca74760-05c3-41f7-aafa-4e20a1021102-memberlist\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.764357 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22149e2b-6b31-4bfe-930d-e14cf24aefb1-cert\") pod \"controller-6968d8fdc4-54m64\" (UID: \"22149e2b-6b31-4bfe-930d-e14cf24aefb1\") " pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.764387 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-467m6\" (UniqueName: \"kubernetes.io/projected/1ca74760-05c3-41f7-aafa-4e20a1021102-kube-api-access-467m6\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:44 crc kubenswrapper[4895]: E0129 16:24:44.765962 4895 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 29 16:24:44 crc kubenswrapper[4895]: E0129 16:24:44.766031 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22149e2b-6b31-4bfe-930d-e14cf24aefb1-metrics-certs podName:22149e2b-6b31-4bfe-930d-e14cf24aefb1 nodeName:}" failed. No retries permitted until 2026-01-29 16:24:45.266007289 +0000 UTC m=+769.068984553 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/22149e2b-6b31-4bfe-930d-e14cf24aefb1-metrics-certs") pod "controller-6968d8fdc4-54m64" (UID: "22149e2b-6b31-4bfe-930d-e14cf24aefb1") : secret "controller-certs-secret" not found Jan 29 16:24:44 crc kubenswrapper[4895]: E0129 16:24:44.766094 4895 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 16:24:44 crc kubenswrapper[4895]: E0129 16:24:44.766215 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ca74760-05c3-41f7-aafa-4e20a1021102-memberlist podName:1ca74760-05c3-41f7-aafa-4e20a1021102 nodeName:}" failed. No retries permitted until 2026-01-29 16:24:45.266183454 +0000 UTC m=+769.069160778 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1ca74760-05c3-41f7-aafa-4e20a1021102-memberlist") pod "speaker-bcds8" (UID: "1ca74760-05c3-41f7-aafa-4e20a1021102") : secret "metallb-memberlist" not found Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.766626 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1ca74760-05c3-41f7-aafa-4e20a1021102-metallb-excludel2\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.769372 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ca74760-05c3-41f7-aafa-4e20a1021102-metrics-certs\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.769422 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/22149e2b-6b31-4bfe-930d-e14cf24aefb1-cert\") pod \"controller-6968d8fdc4-54m64\" (UID: \"22149e2b-6b31-4bfe-930d-e14cf24aefb1\") " pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.782986 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-467m6\" (UniqueName: \"kubernetes.io/projected/1ca74760-05c3-41f7-aafa-4e20a1021102-kube-api-access-467m6\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:44 crc kubenswrapper[4895]: I0129 16:24:44.788777 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4nl6\" (UniqueName: \"kubernetes.io/projected/22149e2b-6b31-4bfe-930d-e14cf24aefb1-kube-api-access-b4nl6\") pod \"controller-6968d8fdc4-54m64\" (UID: \"22149e2b-6b31-4bfe-930d-e14cf24aefb1\") " pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:45 crc kubenswrapper[4895]: I0129 16:24:45.069619 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88eaedc5-b046-4629-a360-92edd0bb09e1-metrics-certs\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:45 crc kubenswrapper[4895]: I0129 16:24:45.077139 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88eaedc5-b046-4629-a360-92edd0bb09e1-metrics-certs\") pod \"frr-k8s-89f9w\" (UID: \"88eaedc5-b046-4629-a360-92edd0bb09e1\") " pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:45 crc kubenswrapper[4895]: I0129 16:24:45.272456 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22149e2b-6b31-4bfe-930d-e14cf24aefb1-metrics-certs\") pod \"controller-6968d8fdc4-54m64\" (UID: 
\"22149e2b-6b31-4bfe-930d-e14cf24aefb1\") " pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:45 crc kubenswrapper[4895]: I0129 16:24:45.272556 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1ca74760-05c3-41f7-aafa-4e20a1021102-memberlist\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:45 crc kubenswrapper[4895]: E0129 16:24:45.272939 4895 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 16:24:45 crc kubenswrapper[4895]: E0129 16:24:45.273054 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ca74760-05c3-41f7-aafa-4e20a1021102-memberlist podName:1ca74760-05c3-41f7-aafa-4e20a1021102 nodeName:}" failed. No retries permitted until 2026-01-29 16:24:46.273020043 +0000 UTC m=+770.075997347 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1ca74760-05c3-41f7-aafa-4e20a1021102-memberlist") pod "speaker-bcds8" (UID: "1ca74760-05c3-41f7-aafa-4e20a1021102") : secret "metallb-memberlist" not found Jan 29 16:24:45 crc kubenswrapper[4895]: I0129 16:24:45.278622 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22149e2b-6b31-4bfe-930d-e14cf24aefb1-metrics-certs\") pod \"controller-6968d8fdc4-54m64\" (UID: \"22149e2b-6b31-4bfe-930d-e14cf24aefb1\") " pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:45 crc kubenswrapper[4895]: I0129 16:24:45.412819 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:45 crc kubenswrapper[4895]: I0129 16:24:45.432103 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:45 crc kubenswrapper[4895]: I0129 16:24:45.470912 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k"] Jan 29 16:24:45 crc kubenswrapper[4895]: I0129 16:24:45.941576 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-54m64"] Jan 29 16:24:45 crc kubenswrapper[4895]: W0129 16:24:45.952257 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22149e2b_6b31_4bfe_930d_e14cf24aefb1.slice/crio-2e4dfc28eab4de5c73d44e853874f082d7ea9a19e8b258e9a874272fdef75a9d WatchSource:0}: Error finding container 2e4dfc28eab4de5c73d44e853874f082d7ea9a19e8b258e9a874272fdef75a9d: Status 404 returned error can't find the container with id 2e4dfc28eab4de5c73d44e853874f082d7ea9a19e8b258e9a874272fdef75a9d Jan 29 16:24:46 crc kubenswrapper[4895]: I0129 16:24:46.290568 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1ca74760-05c3-41f7-aafa-4e20a1021102-memberlist\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:46 crc kubenswrapper[4895]: I0129 16:24:46.297165 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1ca74760-05c3-41f7-aafa-4e20a1021102-memberlist\") pod \"speaker-bcds8\" (UID: \"1ca74760-05c3-41f7-aafa-4e20a1021102\") " pod="metallb-system/speaker-bcds8" Jan 29 16:24:46 crc kubenswrapper[4895]: I0129 16:24:46.303921 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-bcds8" Jan 29 16:24:46 crc kubenswrapper[4895]: W0129 16:24:46.331496 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ca74760_05c3_41f7_aafa_4e20a1021102.slice/crio-c0dab8e59ca2c54165a7947af697e4baf59ca98b2ad918997ab0a2c804df7d79 WatchSource:0}: Error finding container c0dab8e59ca2c54165a7947af697e4baf59ca98b2ad918997ab0a2c804df7d79: Status 404 returned error can't find the container with id c0dab8e59ca2c54165a7947af697e4baf59ca98b2ad918997ab0a2c804df7d79 Jan 29 16:24:46 crc kubenswrapper[4895]: I0129 16:24:46.431224 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bcds8" event={"ID":"1ca74760-05c3-41f7-aafa-4e20a1021102","Type":"ContainerStarted","Data":"c0dab8e59ca2c54165a7947af697e4baf59ca98b2ad918997ab0a2c804df7d79"} Jan 29 16:24:46 crc kubenswrapper[4895]: I0129 16:24:46.433900 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-54m64" event={"ID":"22149e2b-6b31-4bfe-930d-e14cf24aefb1","Type":"ContainerStarted","Data":"4018bcccebac6c21f887faa1746fa5cb80648c3076d5b14db055a4ba7a2d5c96"} Jan 29 16:24:46 crc kubenswrapper[4895]: I0129 16:24:46.433970 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-54m64" event={"ID":"22149e2b-6b31-4bfe-930d-e14cf24aefb1","Type":"ContainerStarted","Data":"f68f8a13c262069cfa2ccb5301e837c893ea59392483a0efe7f02c1591f732ca"} Jan 29 16:24:46 crc kubenswrapper[4895]: I0129 16:24:46.433986 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-54m64" event={"ID":"22149e2b-6b31-4bfe-930d-e14cf24aefb1","Type":"ContainerStarted","Data":"2e4dfc28eab4de5c73d44e853874f082d7ea9a19e8b258e9a874272fdef75a9d"} Jan 29 16:24:46 crc kubenswrapper[4895]: I0129 16:24:46.434040 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:24:46 crc kubenswrapper[4895]: I0129 16:24:46.436723 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" event={"ID":"4ba6715b-7048-450e-a391-7f51a11087a2","Type":"ContainerStarted","Data":"525b931f1f4afd014d3a74e4ba090382ca5b414c6d7ca2e6626475a5e6a8af9b"} Jan 29 16:24:46 crc kubenswrapper[4895]: I0129 16:24:46.438240 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-89f9w" event={"ID":"88eaedc5-b046-4629-a360-92edd0bb09e1","Type":"ContainerStarted","Data":"8c69343f30839bbbae64d8fe629e5a1801cb0077626e6a00ae1c340140643089"} Jan 29 16:24:46 crc kubenswrapper[4895]: I0129 16:24:46.459326 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-54m64" podStartSLOduration=2.459299862 podStartE2EDuration="2.459299862s" podCreationTimestamp="2026-01-29 16:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:46.45423875 +0000 UTC m=+770.257216024" watchObservedRunningTime="2026-01-29 16:24:46.459299862 +0000 UTC m=+770.262277136" Jan 29 16:24:47 crc kubenswrapper[4895]: I0129 16:24:47.457163 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bcds8" event={"ID":"1ca74760-05c3-41f7-aafa-4e20a1021102","Type":"ContainerStarted","Data":"cc17511c769c276f10ed21f194b333d88abeb0a91f7f68af24af2398e887efba"} Jan 29 16:24:47 crc kubenswrapper[4895]: I0129 16:24:47.457705 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-bcds8" event={"ID":"1ca74760-05c3-41f7-aafa-4e20a1021102","Type":"ContainerStarted","Data":"224e952a3806ddc449eccde922d94675af24898d1977800be5a351c5f9585fd4"} Jan 29 16:24:47 crc kubenswrapper[4895]: I0129 16:24:47.481193 4895 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="metallb-system/speaker-bcds8" podStartSLOduration=3.481174743 podStartE2EDuration="3.481174743s" podCreationTimestamp="2026-01-29 16:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:24:47.479734915 +0000 UTC m=+771.282712179" watchObservedRunningTime="2026-01-29 16:24:47.481174743 +0000 UTC m=+771.284152027" Jan 29 16:24:48 crc kubenswrapper[4895]: I0129 16:24:48.466202 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-bcds8" Jan 29 16:24:53 crc kubenswrapper[4895]: I0129 16:24:53.509846 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" event={"ID":"4ba6715b-7048-450e-a391-7f51a11087a2","Type":"ContainerStarted","Data":"4e6e19b7e75a8618470669d1e8959620031a7c22001cde693a12084b5c8abf64"} Jan 29 16:24:53 crc kubenswrapper[4895]: I0129 16:24:53.513316 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" Jan 29 16:24:53 crc kubenswrapper[4895]: I0129 16:24:53.515125 4895 generic.go:334] "Generic (PLEG): container finished" podID="88eaedc5-b046-4629-a360-92edd0bb09e1" containerID="8154a79b5d04d2ee5461ec2c57f3a50ba1500a49628b37c10f39db9dceff2054" exitCode=0 Jan 29 16:24:53 crc kubenswrapper[4895]: I0129 16:24:53.515264 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-89f9w" event={"ID":"88eaedc5-b046-4629-a360-92edd0bb09e1","Type":"ContainerDied","Data":"8154a79b5d04d2ee5461ec2c57f3a50ba1500a49628b37c10f39db9dceff2054"} Jan 29 16:24:53 crc kubenswrapper[4895]: I0129 16:24:53.549913 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" podStartSLOduration=2.002851018 podStartE2EDuration="9.549889918s" podCreationTimestamp="2026-01-29 16:24:44 +0000 UTC" 
firstStartedPulling="2026-01-29 16:24:45.489617092 +0000 UTC m=+769.292594376" lastFinishedPulling="2026-01-29 16:24:53.036656012 +0000 UTC m=+776.839633276" observedRunningTime="2026-01-29 16:24:53.547381352 +0000 UTC m=+777.350358676" watchObservedRunningTime="2026-01-29 16:24:53.549889918 +0000 UTC m=+777.352867192" Jan 29 16:24:54 crc kubenswrapper[4895]: I0129 16:24:54.524079 4895 generic.go:334] "Generic (PLEG): container finished" podID="88eaedc5-b046-4629-a360-92edd0bb09e1" containerID="04118163df267e4351ff69984f2b186cdec6276da7cc9efc274e63bd1cae7308" exitCode=0 Jan 29 16:24:54 crc kubenswrapper[4895]: I0129 16:24:54.524334 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-89f9w" event={"ID":"88eaedc5-b046-4629-a360-92edd0bb09e1","Type":"ContainerDied","Data":"04118163df267e4351ff69984f2b186cdec6276da7cc9efc274e63bd1cae7308"} Jan 29 16:24:55 crc kubenswrapper[4895]: I0129 16:24:55.534570 4895 generic.go:334] "Generic (PLEG): container finished" podID="88eaedc5-b046-4629-a360-92edd0bb09e1" containerID="e3836784336cc14a5befb020be8010aa06fac0a1d3882c37a194cab28f959f29" exitCode=0 Jan 29 16:24:55 crc kubenswrapper[4895]: I0129 16:24:55.534656 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-89f9w" event={"ID":"88eaedc5-b046-4629-a360-92edd0bb09e1","Type":"ContainerDied","Data":"e3836784336cc14a5befb020be8010aa06fac0a1d3882c37a194cab28f959f29"} Jan 29 16:24:56 crc kubenswrapper[4895]: I0129 16:24:56.308630 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-bcds8" Jan 29 16:24:56 crc kubenswrapper[4895]: I0129 16:24:56.549793 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-89f9w" event={"ID":"88eaedc5-b046-4629-a360-92edd0bb09e1","Type":"ContainerStarted","Data":"bb38cc3f5597738d524f8c292f11b5cf49cb5204cc9003b1b661af408ada1aa0"} Jan 29 16:24:56 crc kubenswrapper[4895]: I0129 16:24:56.549842 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-89f9w" event={"ID":"88eaedc5-b046-4629-a360-92edd0bb09e1","Type":"ContainerStarted","Data":"c97d7dad6a4546db06b236162a39ed15e0a75cc374ccaff48440f9be54a8f381"} Jan 29 16:24:56 crc kubenswrapper[4895]: I0129 16:24:56.549854 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-89f9w" event={"ID":"88eaedc5-b046-4629-a360-92edd0bb09e1","Type":"ContainerStarted","Data":"f75dff9031b5810f8c92c25733aba3dd7ccb8e47da0ab8e809e2e999a6695f09"} Jan 29 16:24:56 crc kubenswrapper[4895]: I0129 16:24:56.549876 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-89f9w" event={"ID":"88eaedc5-b046-4629-a360-92edd0bb09e1","Type":"ContainerStarted","Data":"f7588439903c30c4993c4a026b196a967d01cb581ffb8e9c7d89ff1df763a842"} Jan 29 16:24:56 crc kubenswrapper[4895]: I0129 16:24:56.549888 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-89f9w" event={"ID":"88eaedc5-b046-4629-a360-92edd0bb09e1","Type":"ContainerStarted","Data":"556d74718e3ec4d749e6b27e93da5ef9c40816d8e94a9e43df886b907ad94f93"} Jan 29 16:24:57 crc kubenswrapper[4895]: I0129 16:24:57.564016 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-89f9w" event={"ID":"88eaedc5-b046-4629-a360-92edd0bb09e1","Type":"ContainerStarted","Data":"74f5f04712f879060f5da487e8df0e7ad1c552a60bd645f979e1257100ad0342"} Jan 29 16:24:57 crc kubenswrapper[4895]: I0129 16:24:57.564568 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-89f9w" Jan 29 16:24:57 crc kubenswrapper[4895]: I0129 16:24:57.592647 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-89f9w" podStartSLOduration=6.300768061 podStartE2EDuration="13.592613206s" podCreationTimestamp="2026-01-29 16:24:44 +0000 UTC" firstStartedPulling="2026-01-29 16:24:45.76829668 +0000 UTC m=+769.571273964" 
lastFinishedPulling="2026-01-29 16:24:53.060141815 +0000 UTC m=+776.863119109" observedRunningTime="2026-01-29 16:24:57.585247337 +0000 UTC m=+781.388224621" watchObservedRunningTime="2026-01-29 16:24:57.592613206 +0000 UTC m=+781.395590470" Jan 29 16:24:57 crc kubenswrapper[4895]: I0129 16:24:57.823378 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:24:57 crc kubenswrapper[4895]: I0129 16:24:57.823472 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:24:59 crc kubenswrapper[4895]: I0129 16:24:59.287621 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vsjl4"] Jan 29 16:24:59 crc kubenswrapper[4895]: I0129 16:24:59.288961 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vsjl4" Jan 29 16:24:59 crc kubenswrapper[4895]: I0129 16:24:59.291034 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 16:24:59 crc kubenswrapper[4895]: I0129 16:24:59.291673 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 16:24:59 crc kubenswrapper[4895]: I0129 16:24:59.291787 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wf4tc" Jan 29 16:24:59 crc kubenswrapper[4895]: I0129 16:24:59.320707 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vsjl4"] Jan 29 16:24:59 crc kubenswrapper[4895]: I0129 16:24:59.409613 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv7jg\" (UniqueName: \"kubernetes.io/projected/665a2eaa-4d3b-4ec2-ab85-fe1741427ad8-kube-api-access-fv7jg\") pod \"openstack-operator-index-vsjl4\" (UID: \"665a2eaa-4d3b-4ec2-ab85-fe1741427ad8\") " pod="openstack-operators/openstack-operator-index-vsjl4" Jan 29 16:24:59 crc kubenswrapper[4895]: I0129 16:24:59.511807 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7jg\" (UniqueName: \"kubernetes.io/projected/665a2eaa-4d3b-4ec2-ab85-fe1741427ad8-kube-api-access-fv7jg\") pod \"openstack-operator-index-vsjl4\" (UID: \"665a2eaa-4d3b-4ec2-ab85-fe1741427ad8\") " pod="openstack-operators/openstack-operator-index-vsjl4" Jan 29 16:24:59 crc kubenswrapper[4895]: I0129 16:24:59.546214 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv7jg\" (UniqueName: \"kubernetes.io/projected/665a2eaa-4d3b-4ec2-ab85-fe1741427ad8-kube-api-access-fv7jg\") pod \"openstack-operator-index-vsjl4\" (UID: 
\"665a2eaa-4d3b-4ec2-ab85-fe1741427ad8\") " pod="openstack-operators/openstack-operator-index-vsjl4" Jan 29 16:24:59 crc kubenswrapper[4895]: I0129 16:24:59.609758 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vsjl4" Jan 29 16:24:59 crc kubenswrapper[4895]: I0129 16:24:59.860186 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vsjl4"] Jan 29 16:25:00 crc kubenswrapper[4895]: I0129 16:25:00.413804 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-89f9w" Jan 29 16:25:00 crc kubenswrapper[4895]: I0129 16:25:00.460147 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-89f9w" Jan 29 16:25:00 crc kubenswrapper[4895]: I0129 16:25:00.590283 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vsjl4" event={"ID":"665a2eaa-4d3b-4ec2-ab85-fe1741427ad8","Type":"ContainerStarted","Data":"4e75f7ef560f3933c445ad096b414b68be7ad4df54fd7f0dec3d5c706f22d6aa"} Jan 29 16:25:02 crc kubenswrapper[4895]: I0129 16:25:02.460454 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vsjl4"] Jan 29 16:25:03 crc kubenswrapper[4895]: I0129 16:25:03.074126 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vd26w"] Jan 29 16:25:03 crc kubenswrapper[4895]: I0129 16:25:03.075704 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vd26w" Jan 29 16:25:03 crc kubenswrapper[4895]: I0129 16:25:03.101089 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vd26w"] Jan 29 16:25:03 crc kubenswrapper[4895]: I0129 16:25:03.171653 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crcxb\" (UniqueName: \"kubernetes.io/projected/c1fe06a9-7c3c-4541-b7e9-ed083b22d775-kube-api-access-crcxb\") pod \"openstack-operator-index-vd26w\" (UID: \"c1fe06a9-7c3c-4541-b7e9-ed083b22d775\") " pod="openstack-operators/openstack-operator-index-vd26w" Jan 29 16:25:03 crc kubenswrapper[4895]: I0129 16:25:03.273158 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crcxb\" (UniqueName: \"kubernetes.io/projected/c1fe06a9-7c3c-4541-b7e9-ed083b22d775-kube-api-access-crcxb\") pod \"openstack-operator-index-vd26w\" (UID: \"c1fe06a9-7c3c-4541-b7e9-ed083b22d775\") " pod="openstack-operators/openstack-operator-index-vd26w" Jan 29 16:25:03 crc kubenswrapper[4895]: I0129 16:25:03.303787 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crcxb\" (UniqueName: \"kubernetes.io/projected/c1fe06a9-7c3c-4541-b7e9-ed083b22d775-kube-api-access-crcxb\") pod \"openstack-operator-index-vd26w\" (UID: \"c1fe06a9-7c3c-4541-b7e9-ed083b22d775\") " pod="openstack-operators/openstack-operator-index-vd26w" Jan 29 16:25:03 crc kubenswrapper[4895]: I0129 16:25:03.403252 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vd26w" Jan 29 16:25:03 crc kubenswrapper[4895]: I0129 16:25:03.618344 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vsjl4" event={"ID":"665a2eaa-4d3b-4ec2-ab85-fe1741427ad8","Type":"ContainerStarted","Data":"ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f"} Jan 29 16:25:03 crc kubenswrapper[4895]: I0129 16:25:03.618536 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vsjl4" podUID="665a2eaa-4d3b-4ec2-ab85-fe1741427ad8" containerName="registry-server" containerID="cri-o://ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f" gracePeriod=2 Jan 29 16:25:03 crc kubenswrapper[4895]: I0129 16:25:03.642300 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vsjl4" podStartSLOduration=1.232767012 podStartE2EDuration="4.642280931s" podCreationTimestamp="2026-01-29 16:24:59 +0000 UTC" firstStartedPulling="2026-01-29 16:24:59.870057425 +0000 UTC m=+783.673034689" lastFinishedPulling="2026-01-29 16:25:03.279571334 +0000 UTC m=+787.082548608" observedRunningTime="2026-01-29 16:25:03.636969447 +0000 UTC m=+787.439946741" watchObservedRunningTime="2026-01-29 16:25:03.642280931 +0000 UTC m=+787.445258195" Jan 29 16:25:03 crc kubenswrapper[4895]: I0129 16:25:03.683355 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vd26w"] Jan 29 16:25:03 crc kubenswrapper[4895]: W0129 16:25:03.721234 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1fe06a9_7c3c_4541_b7e9_ed083b22d775.slice/crio-2953f69e3e9e98a202dae5a9f6f75ca3a3db488eede5cb8eef4f5fe45588c22b WatchSource:0}: Error finding container 
2953f69e3e9e98a202dae5a9f6f75ca3a3db488eede5cb8eef4f5fe45588c22b: Status 404 returned error can't find the container with id 2953f69e3e9e98a202dae5a9f6f75ca3a3db488eede5cb8eef4f5fe45588c22b Jan 29 16:25:03 crc kubenswrapper[4895]: I0129 16:25:03.967457 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vsjl4" Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.086852 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv7jg\" (UniqueName: \"kubernetes.io/projected/665a2eaa-4d3b-4ec2-ab85-fe1741427ad8-kube-api-access-fv7jg\") pod \"665a2eaa-4d3b-4ec2-ab85-fe1741427ad8\" (UID: \"665a2eaa-4d3b-4ec2-ab85-fe1741427ad8\") " Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.093777 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/665a2eaa-4d3b-4ec2-ab85-fe1741427ad8-kube-api-access-fv7jg" (OuterVolumeSpecName: "kube-api-access-fv7jg") pod "665a2eaa-4d3b-4ec2-ab85-fe1741427ad8" (UID: "665a2eaa-4d3b-4ec2-ab85-fe1741427ad8"). InnerVolumeSpecName "kube-api-access-fv7jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.189146 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv7jg\" (UniqueName: \"kubernetes.io/projected/665a2eaa-4d3b-4ec2-ab85-fe1741427ad8-kube-api-access-fv7jg\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.630377 4895 generic.go:334] "Generic (PLEG): container finished" podID="665a2eaa-4d3b-4ec2-ab85-fe1741427ad8" containerID="ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f" exitCode=0 Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.630435 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vsjl4" Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.630430 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vsjl4" event={"ID":"665a2eaa-4d3b-4ec2-ab85-fe1741427ad8","Type":"ContainerDied","Data":"ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f"} Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.631067 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vsjl4" event={"ID":"665a2eaa-4d3b-4ec2-ab85-fe1741427ad8","Type":"ContainerDied","Data":"4e75f7ef560f3933c445ad096b414b68be7ad4df54fd7f0dec3d5c706f22d6aa"} Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.631115 4895 scope.go:117] "RemoveContainer" containerID="ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f" Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.633616 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vd26w" event={"ID":"c1fe06a9-7c3c-4541-b7e9-ed083b22d775","Type":"ContainerStarted","Data":"fdd8ceaa81b2b0491b6b07ae939e9a0fbe9a34dcf0dc82e658272d17ba390913"} Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.633698 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vd26w" event={"ID":"c1fe06a9-7c3c-4541-b7e9-ed083b22d775","Type":"ContainerStarted","Data":"2953f69e3e9e98a202dae5a9f6f75ca3a3db488eede5cb8eef4f5fe45588c22b"} Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.662570 4895 scope.go:117] "RemoveContainer" containerID="ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f" Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.665624 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vd26w" podStartSLOduration=1.584275623 
podStartE2EDuration="1.665603903s" podCreationTimestamp="2026-01-29 16:25:03 +0000 UTC" firstStartedPulling="2026-01-29 16:25:03.725835639 +0000 UTC m=+787.528812903" lastFinishedPulling="2026-01-29 16:25:03.807163909 +0000 UTC m=+787.610141183" observedRunningTime="2026-01-29 16:25:04.661936644 +0000 UTC m=+788.464913928" watchObservedRunningTime="2026-01-29 16:25:04.665603903 +0000 UTC m=+788.468581167" Jan 29 16:25:04 crc kubenswrapper[4895]: E0129 16:25:04.666371 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f\": container with ID starting with ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f not found: ID does not exist" containerID="ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f" Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.666438 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f"} err="failed to get container status \"ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f\": rpc error: code = NotFound desc = could not find container \"ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f\": container with ID starting with ccb07fe2bda502095e72ea34d52c884078962adee38076b87a5f8d3effd8946f not found: ID does not exist" Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.690385 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vsjl4"] Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.698533 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vsjl4"] Jan 29 16:25:04 crc kubenswrapper[4895]: I0129 16:25:04.747677 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-4kq2k" Jan 29 16:25:05 crc kubenswrapper[4895]: I0129 16:25:05.047696 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="665a2eaa-4d3b-4ec2-ab85-fe1741427ad8" path="/var/lib/kubelet/pods/665a2eaa-4d3b-4ec2-ab85-fe1741427ad8/volumes" Jan 29 16:25:05 crc kubenswrapper[4895]: I0129 16:25:05.416215 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-89f9w" Jan 29 16:25:05 crc kubenswrapper[4895]: I0129 16:25:05.441971 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-54m64" Jan 29 16:25:13 crc kubenswrapper[4895]: I0129 16:25:13.404330 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-vd26w" Jan 29 16:25:13 crc kubenswrapper[4895]: I0129 16:25:13.405133 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-vd26w" Jan 29 16:25:13 crc kubenswrapper[4895]: I0129 16:25:13.451445 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-vd26w" Jan 29 16:25:13 crc kubenswrapper[4895]: I0129 16:25:13.748924 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-vd26w" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.507035 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz"] Jan 29 16:25:20 crc kubenswrapper[4895]: E0129 16:25:20.508090 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665a2eaa-4d3b-4ec2-ab85-fe1741427ad8" containerName="registry-server" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.508108 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="665a2eaa-4d3b-4ec2-ab85-fe1741427ad8" containerName="registry-server" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.508262 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="665a2eaa-4d3b-4ec2-ab85-fe1741427ad8" containerName="registry-server" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.509327 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.512144 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-dmnsc" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.517389 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz"] Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.674098 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgbqg\" (UniqueName: \"kubernetes.io/projected/fe18f98b-f291-4ce0-bd4a-52f356c5b910-kube-api-access-qgbqg\") pod \"f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz\" (UID: \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\") " pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.674179 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe18f98b-f291-4ce0-bd4a-52f356c5b910-bundle\") pod \"f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz\" (UID: \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\") " pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.674220 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe18f98b-f291-4ce0-bd4a-52f356c5b910-util\") pod \"f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz\" (UID: \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\") " pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.776416 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgbqg\" (UniqueName: \"kubernetes.io/projected/fe18f98b-f291-4ce0-bd4a-52f356c5b910-kube-api-access-qgbqg\") pod \"f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz\" (UID: \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\") " pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.776499 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe18f98b-f291-4ce0-bd4a-52f356c5b910-bundle\") pod \"f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz\" (UID: \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\") " pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.776542 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe18f98b-f291-4ce0-bd4a-52f356c5b910-util\") pod \"f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz\" (UID: \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\") " pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.777517 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe18f98b-f291-4ce0-bd4a-52f356c5b910-bundle\") pod 
\"f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz\" (UID: \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\") " pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.778081 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe18f98b-f291-4ce0-bd4a-52f356c5b910-util\") pod \"f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz\" (UID: \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\") " pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.811127 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgbqg\" (UniqueName: \"kubernetes.io/projected/fe18f98b-f291-4ce0-bd4a-52f356c5b910-kube-api-access-qgbqg\") pod \"f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz\" (UID: \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\") " pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:20 crc kubenswrapper[4895]: I0129 16:25:20.833406 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:21 crc kubenswrapper[4895]: I0129 16:25:21.079075 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz"] Jan 29 16:25:21 crc kubenswrapper[4895]: I0129 16:25:21.773457 4895 generic.go:334] "Generic (PLEG): container finished" podID="fe18f98b-f291-4ce0-bd4a-52f356c5b910" containerID="c91b51b11de93630b8008ba82bc66b6eb111bfbeb3d15236318b4b17a262525b" exitCode=0 Jan 29 16:25:21 crc kubenswrapper[4895]: I0129 16:25:21.773519 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" event={"ID":"fe18f98b-f291-4ce0-bd4a-52f356c5b910","Type":"ContainerDied","Data":"c91b51b11de93630b8008ba82bc66b6eb111bfbeb3d15236318b4b17a262525b"} Jan 29 16:25:21 crc kubenswrapper[4895]: I0129 16:25:21.773552 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" event={"ID":"fe18f98b-f291-4ce0-bd4a-52f356c5b910","Type":"ContainerStarted","Data":"1edd7e8c0b95bf94ad404d729bacea7f36396451b67c98c89321933dd1208847"} Jan 29 16:25:22 crc kubenswrapper[4895]: I0129 16:25:22.788643 4895 generic.go:334] "Generic (PLEG): container finished" podID="fe18f98b-f291-4ce0-bd4a-52f356c5b910" containerID="97095435f9ef3a10460b686c2cd7dd5b93b7e8941fb07565c73bdfb8cae9b4a5" exitCode=0 Jan 29 16:25:22 crc kubenswrapper[4895]: I0129 16:25:22.788790 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" event={"ID":"fe18f98b-f291-4ce0-bd4a-52f356c5b910","Type":"ContainerDied","Data":"97095435f9ef3a10460b686c2cd7dd5b93b7e8941fb07565c73bdfb8cae9b4a5"} Jan 29 16:25:23 crc kubenswrapper[4895]: I0129 16:25:23.807371 4895 generic.go:334] 
"Generic (PLEG): container finished" podID="fe18f98b-f291-4ce0-bd4a-52f356c5b910" containerID="29d4b1ea7ee046f9b69c57e5687a6b42125db91585fd2fc635bfe241f1290b8d" exitCode=0 Jan 29 16:25:23 crc kubenswrapper[4895]: I0129 16:25:23.807530 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" event={"ID":"fe18f98b-f291-4ce0-bd4a-52f356c5b910","Type":"ContainerDied","Data":"29d4b1ea7ee046f9b69c57e5687a6b42125db91585fd2fc635bfe241f1290b8d"} Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.175410 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.264526 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe18f98b-f291-4ce0-bd4a-52f356c5b910-util\") pod \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\" (UID: \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\") " Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.264634 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe18f98b-f291-4ce0-bd4a-52f356c5b910-bundle\") pod \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\" (UID: \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\") " Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.264681 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgbqg\" (UniqueName: \"kubernetes.io/projected/fe18f98b-f291-4ce0-bd4a-52f356c5b910-kube-api-access-qgbqg\") pod \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\" (UID: \"fe18f98b-f291-4ce0-bd4a-52f356c5b910\") " Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.265769 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fe18f98b-f291-4ce0-bd4a-52f356c5b910-bundle" (OuterVolumeSpecName: "bundle") pod "fe18f98b-f291-4ce0-bd4a-52f356c5b910" (UID: "fe18f98b-f291-4ce0-bd4a-52f356c5b910"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.277390 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe18f98b-f291-4ce0-bd4a-52f356c5b910-kube-api-access-qgbqg" (OuterVolumeSpecName: "kube-api-access-qgbqg") pod "fe18f98b-f291-4ce0-bd4a-52f356c5b910" (UID: "fe18f98b-f291-4ce0-bd4a-52f356c5b910"). InnerVolumeSpecName "kube-api-access-qgbqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.285158 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe18f98b-f291-4ce0-bd4a-52f356c5b910-util" (OuterVolumeSpecName: "util") pod "fe18f98b-f291-4ce0-bd4a-52f356c5b910" (UID: "fe18f98b-f291-4ce0-bd4a-52f356c5b910"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.366277 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgbqg\" (UniqueName: \"kubernetes.io/projected/fe18f98b-f291-4ce0-bd4a-52f356c5b910-kube-api-access-qgbqg\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.366320 4895 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe18f98b-f291-4ce0-bd4a-52f356c5b910-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.366346 4895 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe18f98b-f291-4ce0-bd4a-52f356c5b910-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.837534 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" event={"ID":"fe18f98b-f291-4ce0-bd4a-52f356c5b910","Type":"ContainerDied","Data":"1edd7e8c0b95bf94ad404d729bacea7f36396451b67c98c89321933dd1208847"} Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.838087 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1edd7e8c0b95bf94ad404d729bacea7f36396451b67c98c89321933dd1208847" Jan 29 16:25:25 crc kubenswrapper[4895]: I0129 16:25:25.837819 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz" Jan 29 16:25:27 crc kubenswrapper[4895]: I0129 16:25:27.823526 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:25:27 crc kubenswrapper[4895]: I0129 16:25:27.823592 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:25:32 crc kubenswrapper[4895]: I0129 16:25:32.690615 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7"] Jan 29 16:25:32 crc kubenswrapper[4895]: E0129 16:25:32.691285 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe18f98b-f291-4ce0-bd4a-52f356c5b910" containerName="extract" Jan 29 16:25:32 crc kubenswrapper[4895]: I0129 16:25:32.691301 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe18f98b-f291-4ce0-bd4a-52f356c5b910" containerName="extract" Jan 29 16:25:32 crc kubenswrapper[4895]: E0129 16:25:32.691311 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe18f98b-f291-4ce0-bd4a-52f356c5b910" containerName="pull" Jan 29 16:25:32 crc kubenswrapper[4895]: I0129 16:25:32.691317 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe18f98b-f291-4ce0-bd4a-52f356c5b910" containerName="pull" Jan 29 16:25:32 crc kubenswrapper[4895]: E0129 16:25:32.691332 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe18f98b-f291-4ce0-bd4a-52f356c5b910" 
containerName="util" Jan 29 16:25:32 crc kubenswrapper[4895]: I0129 16:25:32.691339 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe18f98b-f291-4ce0-bd4a-52f356c5b910" containerName="util" Jan 29 16:25:32 crc kubenswrapper[4895]: I0129 16:25:32.691474 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe18f98b-f291-4ce0-bd4a-52f356c5b910" containerName="extract" Jan 29 16:25:32 crc kubenswrapper[4895]: I0129 16:25:32.692030 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7" Jan 29 16:25:32 crc kubenswrapper[4895]: I0129 16:25:32.694653 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-wnbc9" Jan 29 16:25:32 crc kubenswrapper[4895]: I0129 16:25:32.719037 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7"] Jan 29 16:25:32 crc kubenswrapper[4895]: I0129 16:25:32.790387 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjhdd\" (UniqueName: \"kubernetes.io/projected/f360f1f7-fabd-4268-88e2-ef3fb4e88a9b-kube-api-access-pjhdd\") pod \"openstack-operator-controller-init-f9fb88ddf-v4wc7\" (UID: \"f360f1f7-fabd-4268-88e2-ef3fb4e88a9b\") " pod="openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7" Jan 29 16:25:32 crc kubenswrapper[4895]: I0129 16:25:32.892334 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjhdd\" (UniqueName: \"kubernetes.io/projected/f360f1f7-fabd-4268-88e2-ef3fb4e88a9b-kube-api-access-pjhdd\") pod \"openstack-operator-controller-init-f9fb88ddf-v4wc7\" (UID: \"f360f1f7-fabd-4268-88e2-ef3fb4e88a9b\") " pod="openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7" Jan 29 16:25:32 crc kubenswrapper[4895]: I0129 
16:25:32.914047 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjhdd\" (UniqueName: \"kubernetes.io/projected/f360f1f7-fabd-4268-88e2-ef3fb4e88a9b-kube-api-access-pjhdd\") pod \"openstack-operator-controller-init-f9fb88ddf-v4wc7\" (UID: \"f360f1f7-fabd-4268-88e2-ef3fb4e88a9b\") " pod="openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7" Jan 29 16:25:33 crc kubenswrapper[4895]: I0129 16:25:33.015174 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7" Jan 29 16:25:33 crc kubenswrapper[4895]: I0129 16:25:33.273605 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7"] Jan 29 16:25:33 crc kubenswrapper[4895]: I0129 16:25:33.913591 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7" event={"ID":"f360f1f7-fabd-4268-88e2-ef3fb4e88a9b","Type":"ContainerStarted","Data":"47dd6fa1fa97f5e499da4c812d946c43f756f2cc2c11035719d724e22b444eee"} Jan 29 16:25:38 crc kubenswrapper[4895]: I0129 16:25:38.972410 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7" event={"ID":"f360f1f7-fabd-4268-88e2-ef3fb4e88a9b","Type":"ContainerStarted","Data":"292e2e57413da3daddd69dc90cbb24efef8f4524895a8c1aee07e7bc73f73450"} Jan 29 16:25:38 crc kubenswrapper[4895]: I0129 16:25:38.972962 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7" Jan 29 16:25:39 crc kubenswrapper[4895]: I0129 16:25:39.018297 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7" podStartSLOduration=2.4261623 podStartE2EDuration="7.01827694s" 
podCreationTimestamp="2026-01-29 16:25:32 +0000 UTC" firstStartedPulling="2026-01-29 16:25:33.285446446 +0000 UTC m=+817.088423710" lastFinishedPulling="2026-01-29 16:25:37.877561086 +0000 UTC m=+821.680538350" observedRunningTime="2026-01-29 16:25:39.015369422 +0000 UTC m=+822.818346696" watchObservedRunningTime="2026-01-29 16:25:39.01827694 +0000 UTC m=+822.821254204" Jan 29 16:25:43 crc kubenswrapper[4895]: I0129 16:25:43.018661 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-f9fb88ddf-v4wc7" Jan 29 16:25:57 crc kubenswrapper[4895]: I0129 16:25:57.823353 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:25:57 crc kubenswrapper[4895]: I0129 16:25:57.824284 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:25:57 crc kubenswrapper[4895]: I0129 16:25:57.824370 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:25:57 crc kubenswrapper[4895]: I0129 16:25:57.825383 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38b01ff5ef7faf80c7f2424640fb866df9e6d62369651d4360c4c301990dfde0"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:25:57 crc 
kubenswrapper[4895]: I0129 16:25:57.825464 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://38b01ff5ef7faf80c7f2424640fb866df9e6d62369651d4360c4c301990dfde0" gracePeriod=600 Jan 29 16:25:58 crc kubenswrapper[4895]: I0129 16:25:58.140440 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="38b01ff5ef7faf80c7f2424640fb866df9e6d62369651d4360c4c301990dfde0" exitCode=0 Jan 29 16:25:58 crc kubenswrapper[4895]: I0129 16:25:58.140948 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"38b01ff5ef7faf80c7f2424640fb866df9e6d62369651d4360c4c301990dfde0"} Jan 29 16:25:58 crc kubenswrapper[4895]: I0129 16:25:58.141025 4895 scope.go:117] "RemoveContainer" containerID="b99c1c7666a18a4fff479ed291067c0500fca6ffc17eb2b91e878cb7ce4ad701" Jan 29 16:25:59 crc kubenswrapper[4895]: I0129 16:25:59.163761 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"56eae442f108da9a8c7cd978ba66ad557a49280ec8ee87651bc60ede37bf78eb"} Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.802813 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.804770 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.819287 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-lghx9" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.819504 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.820485 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.823710 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.829746 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dhl5g" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.836233 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.837194 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.841545 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-vndb2" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.850944 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.874659 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.888696 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-flr48"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.889759 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-flr48" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.897468 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxlvq\" (UniqueName: \"kubernetes.io/projected/928d4ebe-bbab-4956-9d41-a6ef3c91e62d-kube-api-access-gxlvq\") pod \"glance-operator-controller-manager-8886f4c47-flr48\" (UID: \"928d4ebe-bbab-4956-9d41-a6ef3c91e62d\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-flr48" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.897510 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4g5z\" (UniqueName: \"kubernetes.io/projected/1e62c1e2-2a44-4985-a787-ad3cfaa3ba5d-kube-api-access-r4g5z\") pod \"cinder-operator-controller-manager-bf99b56bc-929qv\" (UID: \"1e62c1e2-2a44-4985-a787-ad3cfaa3ba5d\") " pod="openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.897565 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6744n\" (UniqueName: \"kubernetes.io/projected/6377326a-b83d-43f6-bb58-fcf54eac8ac2-kube-api-access-6744n\") pod \"designate-operator-controller-manager-6d9697b7f4-l6jpv\" (UID: \"6377326a-b83d-43f6-bb58-fcf54eac8ac2\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.897586 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbkjd\" (UniqueName: \"kubernetes.io/projected/f5589d31-28a0-45e8-a3fa-9b48576c81fc-kube-api-access-sbkjd\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-6hjs8\" (UID: \"f5589d31-28a0-45e8-a3fa-9b48576c81fc\") " 
pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.899714 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-rwtmq" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.926923 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-flr48"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.938454 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.939315 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.946749 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-726nr" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.948530 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.958406 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.959490 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.966832 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-dsscl" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.981990 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.986887 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb"] Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.988204 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.992510 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-7xklf" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.998711 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6744n\" (UniqueName: \"kubernetes.io/projected/6377326a-b83d-43f6-bb58-fcf54eac8ac2-kube-api-access-6744n\") pod \"designate-operator-controller-manager-6d9697b7f4-l6jpv\" (UID: \"6377326a-b83d-43f6-bb58-fcf54eac8ac2\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.998758 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbkjd\" (UniqueName: \"kubernetes.io/projected/f5589d31-28a0-45e8-a3fa-9b48576c81fc-kube-api-access-sbkjd\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-6hjs8\" (UID: \"f5589d31-28a0-45e8-a3fa-9b48576c81fc\") " 
pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.998818 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxlvq\" (UniqueName: \"kubernetes.io/projected/928d4ebe-bbab-4956-9d41-a6ef3c91e62d-kube-api-access-gxlvq\") pod \"glance-operator-controller-manager-8886f4c47-flr48\" (UID: \"928d4ebe-bbab-4956-9d41-a6ef3c91e62d\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-flr48" Jan 29 16:26:21 crc kubenswrapper[4895]: I0129 16:26:21.998837 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4g5z\" (UniqueName: \"kubernetes.io/projected/1e62c1e2-2a44-4985-a787-ad3cfaa3ba5d-kube-api-access-r4g5z\") pod \"cinder-operator-controller-manager-bf99b56bc-929qv\" (UID: \"1e62c1e2-2a44-4985-a787-ad3cfaa3ba5d\") " pod="openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.001937 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.003114 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.029625 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-rjqnq" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.030687 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.031391 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.056928 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4g5z\" (UniqueName: \"kubernetes.io/projected/1e62c1e2-2a44-4985-a787-ad3cfaa3ba5d-kube-api-access-r4g5z\") pod \"cinder-operator-controller-manager-bf99b56bc-929qv\" (UID: \"1e62c1e2-2a44-4985-a787-ad3cfaa3ba5d\") " pod="openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.070283 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbkjd\" (UniqueName: \"kubernetes.io/projected/f5589d31-28a0-45e8-a3fa-9b48576c81fc-kube-api-access-sbkjd\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-6hjs8\" (UID: \"f5589d31-28a0-45e8-a3fa-9b48576c81fc\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.079356 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxlvq\" (UniqueName: \"kubernetes.io/projected/928d4ebe-bbab-4956-9d41-a6ef3c91e62d-kube-api-access-gxlvq\") pod \"glance-operator-controller-manager-8886f4c47-flr48\" (UID: \"928d4ebe-bbab-4956-9d41-a6ef3c91e62d\") " 
pod="openstack-operators/glance-operator-controller-manager-8886f4c47-flr48" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.080042 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6744n\" (UniqueName: \"kubernetes.io/projected/6377326a-b83d-43f6-bb58-fcf54eac8ac2-kube-api-access-6744n\") pod \"designate-operator-controller-manager-6d9697b7f4-l6jpv\" (UID: \"6377326a-b83d-43f6-bb58-fcf54eac8ac2\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.114987 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7xln\" (UniqueName: \"kubernetes.io/projected/0c18b9fd-01fa-4be9-be45-5ad49240591a-kube-api-access-s7xln\") pod \"heat-operator-controller-manager-69d6db494d-qwjw9\" (UID: \"0c18b9fd-01fa-4be9-be45-5ad49240591a\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.115829 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slstc\" (UniqueName: \"kubernetes.io/projected/e605b5cd-74f0-4c19-b7e7-9f726595eeb5-kube-api-access-slstc\") pod \"horizon-operator-controller-manager-5fb775575f-vwdtk\" (UID: \"e605b5cd-74f0-4c19-b7e7-9f726595eeb5\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.116005 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt7ct\" (UniqueName: \"kubernetes.io/projected/a8dbc4ca-5e45-424e-aa1f-6c7e9e24e74c-kube-api-access-vt7ct\") pod \"ironic-operator-controller-manager-5f4b8bd54d-46kvb\" (UID: \"a8dbc4ca-5e45-424e-aa1f-6c7e9e24e74c\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb" Jan 29 16:26:22 crc 
kubenswrapper[4895]: I0129 16:26:22.130776 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.138151 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.145785 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.146770 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.159508 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gnkdl" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.160815 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.166625 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-96st6"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.167577 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-96st6" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.177227 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-4swd6" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.182980 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.184965 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.211793 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-flr48" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.218389 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.218446 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.218465 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-96st6"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.218603 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.219681 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert\") pod \"infra-operator-controller-manager-79955696d6-pd4hv\" (UID: \"5a7c85c3-b835-48f5-99ca-2c2949ab85bf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.219780 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs78x\" (UniqueName: \"kubernetes.io/projected/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-kube-api-access-vs78x\") pod \"infra-operator-controller-manager-79955696d6-pd4hv\" (UID: \"5a7c85c3-b835-48f5-99ca-2c2949ab85bf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.219886 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7xln\" (UniqueName: \"kubernetes.io/projected/0c18b9fd-01fa-4be9-be45-5ad49240591a-kube-api-access-s7xln\") pod \"heat-operator-controller-manager-69d6db494d-qwjw9\" (UID: \"0c18b9fd-01fa-4be9-be45-5ad49240591a\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.219930 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slstc\" (UniqueName: \"kubernetes.io/projected/e605b5cd-74f0-4c19-b7e7-9f726595eeb5-kube-api-access-slstc\") pod \"horizon-operator-controller-manager-5fb775575f-vwdtk\" (UID: \"e605b5cd-74f0-4c19-b7e7-9f726595eeb5\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.219954 
4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt7ct\" (UniqueName: \"kubernetes.io/projected/a8dbc4ca-5e45-424e-aa1f-6c7e9e24e74c-kube-api-access-vt7ct\") pod \"ironic-operator-controller-manager-5f4b8bd54d-46kvb\" (UID: \"a8dbc4ca-5e45-424e-aa1f-6c7e9e24e74c\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.221501 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tdw27" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.248301 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.250637 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.260667 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4g6zl" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.271975 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt7ct\" (UniqueName: \"kubernetes.io/projected/a8dbc4ca-5e45-424e-aa1f-6c7e9e24e74c-kube-api-access-vt7ct\") pod \"ironic-operator-controller-manager-5f4b8bd54d-46kvb\" (UID: \"a8dbc4ca-5e45-424e-aa1f-6c7e9e24e74c\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.276525 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.285713 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-slstc\" (UniqueName: \"kubernetes.io/projected/e605b5cd-74f0-4c19-b7e7-9f726595eeb5-kube-api-access-slstc\") pod \"horizon-operator-controller-manager-5fb775575f-vwdtk\" (UID: \"e605b5cd-74f0-4c19-b7e7-9f726595eeb5\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.301550 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7xln\" (UniqueName: \"kubernetes.io/projected/0c18b9fd-01fa-4be9-be45-5ad49240591a-kube-api-access-s7xln\") pod \"heat-operator-controller-manager-69d6db494d-qwjw9\" (UID: \"0c18b9fd-01fa-4be9-be45-5ad49240591a\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.321277 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlzq7\" (UniqueName: \"kubernetes.io/projected/53136333-31ce-4a3c-9477-0dde82bc7ec0-kube-api-access-qlzq7\") pod \"mariadb-operator-controller-manager-67bf948998-l6b5x\" (UID: \"53136333-31ce-4a3c-9477-0dde82bc7ec0\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.321818 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtwbx\" (UniqueName: \"kubernetes.io/projected/0458d64d-6cee-41f7-bb2d-17fe71893b95-kube-api-access-dtwbx\") pod \"manila-operator-controller-manager-7dd968899f-96st6\" (UID: \"0458d64d-6cee-41f7-bb2d-17fe71893b95\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-96st6" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.321988 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert\") pod 
\"infra-operator-controller-manager-79955696d6-pd4hv\" (UID: \"5a7c85c3-b835-48f5-99ca-2c2949ab85bf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.322100 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n2lp\" (UniqueName: \"kubernetes.io/projected/2fb0cdc6-64b5-432f-a998-26174db87dbb-kube-api-access-2n2lp\") pod \"keystone-operator-controller-manager-84f48565d4-29rf4\" (UID: \"2fb0cdc6-64b5-432f-a998-26174db87dbb\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.322187 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs78x\" (UniqueName: \"kubernetes.io/projected/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-kube-api-access-vs78x\") pod \"infra-operator-controller-manager-79955696d6-pd4hv\" (UID: \"5a7c85c3-b835-48f5-99ca-2c2949ab85bf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" Jan 29 16:26:22 crc kubenswrapper[4895]: E0129 16:26:22.322755 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:26:22 crc kubenswrapper[4895]: E0129 16:26:22.331342 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert podName:5a7c85c3-b835-48f5-99ca-2c2949ab85bf nodeName:}" failed. No retries permitted until 2026-01-29 16:26:22.831299836 +0000 UTC m=+866.634277100 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert") pod "infra-operator-controller-manager-79955696d6-pd4hv" (UID: "5a7c85c3-b835-48f5-99ca-2c2949ab85bf") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.331794 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.334206 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.339102 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.341658 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs78x\" (UniqueName: \"kubernetes.io/projected/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-kube-api-access-vs78x\") pod \"infra-operator-controller-manager-79955696d6-pd4hv\" (UID: \"5a7c85c3-b835-48f5-99ca-2c2949ab85bf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.375335 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gtpb5" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.397704 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.425188 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlzq7\" (UniqueName: 
\"kubernetes.io/projected/53136333-31ce-4a3c-9477-0dde82bc7ec0-kube-api-access-qlzq7\") pod \"mariadb-operator-controller-manager-67bf948998-l6b5x\" (UID: \"53136333-31ce-4a3c-9477-0dde82bc7ec0\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.426228 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqgjl\" (UniqueName: \"kubernetes.io/projected/14c9beca-1f3d-42cb-91d2-f7e391a9761a-kube-api-access-kqgjl\") pod \"nova-operator-controller-manager-55bff696bd-cmhwf\" (UID: \"14c9beca-1f3d-42cb-91d2-f7e391a9761a\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.426347 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtwbx\" (UniqueName: \"kubernetes.io/projected/0458d64d-6cee-41f7-bb2d-17fe71893b95-kube-api-access-dtwbx\") pod \"manila-operator-controller-manager-7dd968899f-96st6\" (UID: \"0458d64d-6cee-41f7-bb2d-17fe71893b95\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-96st6" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.426435 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckb96\" (UniqueName: \"kubernetes.io/projected/ebaae8a3-53e7-4aec-88cb-9723acd3350d-kube-api-access-ckb96\") pod \"neutron-operator-controller-manager-585dbc889-4v7lq\" (UID: \"ebaae8a3-53e7-4aec-88cb-9723acd3350d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.426575 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n2lp\" (UniqueName: \"kubernetes.io/projected/2fb0cdc6-64b5-432f-a998-26174db87dbb-kube-api-access-2n2lp\") pod 
\"keystone-operator-controller-manager-84f48565d4-29rf4\" (UID: \"2fb0cdc6-64b5-432f-a998-26174db87dbb\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.427294 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.456198 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.457256 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.461411 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtwbx\" (UniqueName: \"kubernetes.io/projected/0458d64d-6cee-41f7-bb2d-17fe71893b95-kube-api-access-dtwbx\") pod \"manila-operator-controller-manager-7dd968899f-96st6\" (UID: \"0458d64d-6cee-41f7-bb2d-17fe71893b95\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-96st6" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.462629 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-2bfgw" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.468020 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n2lp\" (UniqueName: \"kubernetes.io/projected/2fb0cdc6-64b5-432f-a998-26174db87dbb-kube-api-access-2n2lp\") pod \"keystone-operator-controller-manager-84f48565d4-29rf4\" (UID: \"2fb0cdc6-64b5-432f-a998-26174db87dbb\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.472958 4895 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.478790 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlzq7\" (UniqueName: \"kubernetes.io/projected/53136333-31ce-4a3c-9477-0dde82bc7ec0-kube-api-access-qlzq7\") pod \"mariadb-operator-controller-manager-67bf948998-l6b5x\" (UID: \"53136333-31ce-4a3c-9477-0dde82bc7ec0\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.485949 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.486880 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.489985 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dmprf" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.490955 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.501947 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.503237 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.503346 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.508158 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.511108 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-d4fmk" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.519015 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.520517 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.529155 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqgjl\" (UniqueName: \"kubernetes.io/projected/14c9beca-1f3d-42cb-91d2-f7e391a9761a-kube-api-access-kqgjl\") pod \"nova-operator-controller-manager-55bff696bd-cmhwf\" (UID: \"14c9beca-1f3d-42cb-91d2-f7e391a9761a\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.529212 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckb96\" (UniqueName: \"kubernetes.io/projected/ebaae8a3-53e7-4aec-88cb-9723acd3350d-kube-api-access-ckb96\") pod \"neutron-operator-controller-manager-585dbc889-4v7lq\" (UID: \"ebaae8a3-53e7-4aec-88cb-9723acd3350d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.529259 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsmf8\" (UniqueName: \"kubernetes.io/projected/dbd2491c-2587-47d3-8201-26b8e68bfcb7-kube-api-access-lsmf8\") pod \"placement-operator-controller-manager-5b964cf4cd-b4wtv\" (UID: \"dbd2491c-2587-47d3-8201-26b8e68bfcb7\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.532760 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8wlmb" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.542185 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.555088 4895 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.556071 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.556542 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckb96\" (UniqueName: \"kubernetes.io/projected/ebaae8a3-53e7-4aec-88cb-9723acd3350d-kube-api-access-ckb96\") pod \"neutron-operator-controller-manager-585dbc889-4v7lq\" (UID: \"ebaae8a3-53e7-4aec-88cb-9723acd3350d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.557742 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqgjl\" (UniqueName: \"kubernetes.io/projected/14c9beca-1f3d-42cb-91d2-f7e391a9761a-kube-api-access-kqgjl\") pod \"nova-operator-controller-manager-55bff696bd-cmhwf\" (UID: \"14c9beca-1f3d-42cb-91d2-f7e391a9761a\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.558946 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-sjwjl" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.564417 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.567498 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.568377 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.576383 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.576456 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-m4cmm" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.577226 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.587656 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.595443 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-96st6" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.616025 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.619327 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.620221 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.661165 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.662404 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-g27qf" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.680083 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsmf8\" (UniqueName: \"kubernetes.io/projected/dbd2491c-2587-47d3-8201-26b8e68bfcb7-kube-api-access-lsmf8\") pod \"placement-operator-controller-manager-5b964cf4cd-b4wtv\" (UID: \"dbd2491c-2587-47d3-8201-26b8e68bfcb7\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.680137 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh\" (UID: \"861c2b0f-fa28-408a-b270-a7e1f9ee57e2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.680200 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl8hq\" (UniqueName: \"kubernetes.io/projected/826ca63d-ce7b-4d52-9fb5-31bdbb523416-kube-api-access-dl8hq\") pod \"octavia-operator-controller-manager-6687f8d877-jndgk\" (UID: \"826ca63d-ce7b-4d52-9fb5-31bdbb523416\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.680248 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk746\" (UniqueName: 
\"kubernetes.io/projected/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-kube-api-access-zk746\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh\" (UID: \"861c2b0f-fa28-408a-b270-a7e1f9ee57e2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.680295 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4xqm\" (UniqueName: \"kubernetes.io/projected/15381bb6-d539-48b4-976c-5b2a27fa7aaa-kube-api-access-q4xqm\") pod \"ovn-operator-controller-manager-788c46999f-2xzrq\" (UID: \"15381bb6-d539-48b4-976c-5b2a27fa7aaa\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.689205 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.734237 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsmf8\" (UniqueName: \"kubernetes.io/projected/dbd2491c-2587-47d3-8201-26b8e68bfcb7-kube-api-access-lsmf8\") pod \"placement-operator-controller-manager-5b964cf4cd-b4wtv\" (UID: \"dbd2491c-2587-47d3-8201-26b8e68bfcb7\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.750141 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-vcgf8"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.751405 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-vcgf8"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.751520 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.758038 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.759562 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6ppst" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.783724 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl8hq\" (UniqueName: \"kubernetes.io/projected/826ca63d-ce7b-4d52-9fb5-31bdbb523416-kube-api-access-dl8hq\") pod \"octavia-operator-controller-manager-6687f8d877-jndgk\" (UID: \"826ca63d-ce7b-4d52-9fb5-31bdbb523416\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.783796 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsxst\" (UniqueName: \"kubernetes.io/projected/9730658e-c8ca-4448-a3c0-68116c92840f-kube-api-access-xsxst\") pod \"telemetry-operator-controller-manager-64b5b76f97-8vvcr\" (UID: \"9730658e-c8ca-4448-a3c0-68116c92840f\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.783941 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk746\" (UniqueName: \"kubernetes.io/projected/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-kube-api-access-zk746\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh\" (UID: \"861c2b0f-fa28-408a-b270-a7e1f9ee57e2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 
16:26:22.784063 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdskb\" (UniqueName: \"kubernetes.io/projected/3c7eb208-8a46-49ed-8efd-1b0fceabd3c8-kube-api-access-cdskb\") pod \"swift-operator-controller-manager-68fc8c869-5z28c\" (UID: \"3c7eb208-8a46-49ed-8efd-1b0fceabd3c8\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.784103 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4xqm\" (UniqueName: \"kubernetes.io/projected/15381bb6-d539-48b4-976c-5b2a27fa7aaa-kube-api-access-q4xqm\") pod \"ovn-operator-controller-manager-788c46999f-2xzrq\" (UID: \"15381bb6-d539-48b4-976c-5b2a27fa7aaa\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.785926 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8z9h\" (UniqueName: \"kubernetes.io/projected/91b335a1-04fb-48e6-bf93-8bce6c4da648-kube-api-access-x8z9h\") pod \"test-operator-controller-manager-56f8bfcd9f-f7b6w\" (UID: \"91b335a1-04fb-48e6-bf93-8bce6c4da648\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.786031 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh\" (UID: \"861c2b0f-fa28-408a-b270-a7e1f9ee57e2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" Jan 29 16:26:22 crc kubenswrapper[4895]: E0129 16:26:22.787293 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:26:22 crc kubenswrapper[4895]: E0129 16:26:22.787408 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert podName:861c2b0f-fa28-408a-b270-a7e1f9ee57e2 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:23.287387451 +0000 UTC m=+867.090364715 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" (UID: "861c2b0f-fa28-408a-b270-a7e1f9ee57e2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.812784 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk746\" (UniqueName: \"kubernetes.io/projected/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-kube-api-access-zk746\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh\" (UID: \"861c2b0f-fa28-408a-b270-a7e1f9ee57e2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.814154 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl8hq\" (UniqueName: \"kubernetes.io/projected/826ca63d-ce7b-4d52-9fb5-31bdbb523416-kube-api-access-dl8hq\") pod \"octavia-operator-controller-manager-6687f8d877-jndgk\" (UID: \"826ca63d-ce7b-4d52-9fb5-31bdbb523416\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.823802 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.826897 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.828283 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4xqm\" (UniqueName: \"kubernetes.io/projected/15381bb6-d539-48b4-976c-5b2a27fa7aaa-kube-api-access-q4xqm\") pod \"ovn-operator-controller-manager-788c46999f-2xzrq\" (UID: \"15381bb6-d539-48b4-976c-5b2a27fa7aaa\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.833242 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.833453 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jvscc" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.833407 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.834850 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.842070 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.887194 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.887251 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2dxr\" (UniqueName: \"kubernetes.io/projected/d3b8a7bf-6741-4bfe-8835-e942e688098d-kube-api-access-k2dxr\") pod \"watcher-operator-controller-manager-564965969-vcgf8\" (UID: \"d3b8a7bf-6741-4bfe-8835-e942e688098d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.887292 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.887322 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdskb\" (UniqueName: \"kubernetes.io/projected/3c7eb208-8a46-49ed-8efd-1b0fceabd3c8-kube-api-access-cdskb\") pod 
\"swift-operator-controller-manager-68fc8c869-5z28c\" (UID: \"3c7eb208-8a46-49ed-8efd-1b0fceabd3c8\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.887358 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr4j8\" (UniqueName: \"kubernetes.io/projected/7c8841b5-eefc-4ce3-bb5b-111a252e4316-kube-api-access-lr4j8\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.887393 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8z9h\" (UniqueName: \"kubernetes.io/projected/91b335a1-04fb-48e6-bf93-8bce6c4da648-kube-api-access-x8z9h\") pod \"test-operator-controller-manager-56f8bfcd9f-f7b6w\" (UID: \"91b335a1-04fb-48e6-bf93-8bce6c4da648\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.887433 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert\") pod \"infra-operator-controller-manager-79955696d6-pd4hv\" (UID: \"5a7c85c3-b835-48f5-99ca-2c2949ab85bf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.887464 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsxst\" (UniqueName: \"kubernetes.io/projected/9730658e-c8ca-4448-a3c0-68116c92840f-kube-api-access-xsxst\") pod \"telemetry-operator-controller-manager-64b5b76f97-8vvcr\" (UID: \"9730658e-c8ca-4448-a3c0-68116c92840f\") " 
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr" Jan 29 16:26:22 crc kubenswrapper[4895]: E0129 16:26:22.887986 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:26:22 crc kubenswrapper[4895]: E0129 16:26:22.888035 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert podName:5a7c85c3-b835-48f5-99ca-2c2949ab85bf nodeName:}" failed. No retries permitted until 2026-01-29 16:26:23.888018402 +0000 UTC m=+867.690995666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert") pod "infra-operator-controller-manager-79955696d6-pd4hv" (UID: "5a7c85c3-b835-48f5-99ca-2c2949ab85bf") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.893992 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.895538 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.896766 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.910975 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bhncf" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.912376 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsxst\" (UniqueName: \"kubernetes.io/projected/9730658e-c8ca-4448-a3c0-68116c92840f-kube-api-access-xsxst\") pod \"telemetry-operator-controller-manager-64b5b76f97-8vvcr\" (UID: \"9730658e-c8ca-4448-a3c0-68116c92840f\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.912452 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4"] Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.924521 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8z9h\" (UniqueName: \"kubernetes.io/projected/91b335a1-04fb-48e6-bf93-8bce6c4da648-kube-api-access-x8z9h\") pod \"test-operator-controller-manager-56f8bfcd9f-f7b6w\" (UID: \"91b335a1-04fb-48e6-bf93-8bce6c4da648\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.947404 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdskb\" (UniqueName: \"kubernetes.io/projected/3c7eb208-8a46-49ed-8efd-1b0fceabd3c8-kube-api-access-cdskb\") pod \"swift-operator-controller-manager-68fc8c869-5z28c\" (UID: \"3c7eb208-8a46-49ed-8efd-1b0fceabd3c8\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.962481 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.990497 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.990830 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr4j8\" (UniqueName: \"kubernetes.io/projected/7c8841b5-eefc-4ce3-bb5b-111a252e4316-kube-api-access-lr4j8\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.990897 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtmtg\" (UniqueName: \"kubernetes.io/projected/7b0b2050-5cdb-44e0-a858-bf6aa331d2c6-kube-api-access-dtmtg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dbmk4\" (UID: \"7b0b2050-5cdb-44e0-a858-bf6aa331d2c6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 16:26:22.990960 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:22 crc kubenswrapper[4895]: I0129 
16:26:22.990999 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2dxr\" (UniqueName: \"kubernetes.io/projected/d3b8a7bf-6741-4bfe-8835-e942e688098d-kube-api-access-k2dxr\") pod \"watcher-operator-controller-manager-564965969-vcgf8\" (UID: \"d3b8a7bf-6741-4bfe-8835-e942e688098d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8" Jan 29 16:26:22 crc kubenswrapper[4895]: E0129 16:26:22.990622 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:26:22 crc kubenswrapper[4895]: E0129 16:26:22.991203 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs podName:7c8841b5-eefc-4ce3-bb5b-111a252e4316 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:23.491181419 +0000 UTC m=+867.294158673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs") pod "openstack-operator-controller-manager-6dbd47d457-rqtbg" (UID: "7c8841b5-eefc-4ce3-bb5b-111a252e4316") : secret "metrics-server-cert" not found Jan 29 16:26:22 crc kubenswrapper[4895]: E0129 16:26:22.991250 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:26:22 crc kubenswrapper[4895]: E0129 16:26:22.991286 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs podName:7c8841b5-eefc-4ce3-bb5b-111a252e4316 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:23.491272452 +0000 UTC m=+867.294249726 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs") pod "openstack-operator-controller-manager-6dbd47d457-rqtbg" (UID: "7c8841b5-eefc-4ce3-bb5b-111a252e4316") : secret "webhook-server-cert" not found Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:22.997998 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv"] Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.019054 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv"] Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.019747 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr4j8\" (UniqueName: \"kubernetes.io/projected/7c8841b5-eefc-4ce3-bb5b-111a252e4316-kube-api-access-lr4j8\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.026521 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2dxr\" (UniqueName: \"kubernetes.io/projected/d3b8a7bf-6741-4bfe-8835-e942e688098d-kube-api-access-k2dxr\") pod \"watcher-operator-controller-manager-564965969-vcgf8\" (UID: \"d3b8a7bf-6741-4bfe-8835-e942e688098d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8" Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.082220 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c" Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.092855 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr" Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.093287 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtmtg\" (UniqueName: \"kubernetes.io/projected/7b0b2050-5cdb-44e0-a858-bf6aa331d2c6-kube-api-access-dtmtg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dbmk4\" (UID: \"7b0b2050-5cdb-44e0-a858-bf6aa331d2c6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4" Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.131246 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w" Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.140657 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtmtg\" (UniqueName: \"kubernetes.io/projected/7b0b2050-5cdb-44e0-a858-bf6aa331d2c6-kube-api-access-dtmtg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dbmk4\" (UID: \"7b0b2050-5cdb-44e0-a858-bf6aa331d2c6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4" Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.159308 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8" Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.224126 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4" Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.296278 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh\" (UID: \"861c2b0f-fa28-408a-b270-a7e1f9ee57e2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" Jan 29 16:26:23 crc kubenswrapper[4895]: E0129 16:26:23.296484 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:26:23 crc kubenswrapper[4895]: E0129 16:26:23.296581 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert podName:861c2b0f-fa28-408a-b270-a7e1f9ee57e2 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:24.296551063 +0000 UTC m=+868.099528387 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" (UID: "861c2b0f-fa28-408a-b270-a7e1f9ee57e2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.312557 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk"] Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.354189 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv" event={"ID":"1e62c1e2-2a44-4985-a787-ad3cfaa3ba5d","Type":"ContainerStarted","Data":"e8af310d1185d4dd7a9c1d0d92c010e495212f11f9027d7ac91a284bf166213b"} Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.362724 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk" event={"ID":"e605b5cd-74f0-4c19-b7e7-9f726595eeb5","Type":"ContainerStarted","Data":"68f9cbff9901868daf62ad18d3da65bf490931707ebc8e525e97fb2ab87e8c64"} Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.369055 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv" event={"ID":"6377326a-b83d-43f6-bb58-fcf54eac8ac2","Type":"ContainerStarted","Data":"e8d659259ec653a59bd146e09aaaff903b18722e3c3c2dabe40d215f70e3d0ff"} Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.387011 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-flr48"] Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.391924 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8"] Jan 29 16:26:23 crc kubenswrapper[4895]: 
I0129 16:26:23.411296 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb"] Jan 29 16:26:23 crc kubenswrapper[4895]: W0129 16:26:23.436835 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8dbc4ca_5e45_424e_aa1f_6c7e9e24e74c.slice/crio-5dc03640e0a2c7f3b10623f074c0f48b592366ca585e1a65389b851d5cd26c3b WatchSource:0}: Error finding container 5dc03640e0a2c7f3b10623f074c0f48b592366ca585e1a65389b851d5cd26c3b: Status 404 returned error can't find the container with id 5dc03640e0a2c7f3b10623f074c0f48b592366ca585e1a65389b851d5cd26c3b Jan 29 16:26:23 crc kubenswrapper[4895]: W0129 16:26:23.454152 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5589d31_28a0_45e8_a3fa_9b48576c81fc.slice/crio-7b47cc8f79aed250460c2b88377c3cbfe568f38dcdfcdbfa1cf10ebfe64f6c04 WatchSource:0}: Error finding container 7b47cc8f79aed250460c2b88377c3cbfe568f38dcdfcdbfa1cf10ebfe64f6c04: Status 404 returned error can't find the container with id 7b47cc8f79aed250460c2b88377c3cbfe568f38dcdfcdbfa1cf10ebfe64f6c04 Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.499019 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.499111 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" 
(UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:23 crc kubenswrapper[4895]: E0129 16:26:23.499297 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:26:23 crc kubenswrapper[4895]: E0129 16:26:23.499357 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs podName:7c8841b5-eefc-4ce3-bb5b-111a252e4316 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:24.499339016 +0000 UTC m=+868.302316280 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs") pod "openstack-operator-controller-manager-6dbd47d457-rqtbg" (UID: "7c8841b5-eefc-4ce3-bb5b-111a252e4316") : secret "metrics-server-cert" not found Jan 29 16:26:23 crc kubenswrapper[4895]: E0129 16:26:23.499381 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:26:23 crc kubenswrapper[4895]: E0129 16:26:23.499493 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs podName:7c8841b5-eefc-4ce3-bb5b-111a252e4316 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:24.499471199 +0000 UTC m=+868.302448463 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs") pod "openstack-operator-controller-manager-6dbd47d457-rqtbg" (UID: "7c8841b5-eefc-4ce3-bb5b-111a252e4316") : secret "webhook-server-cert" not found Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.867752 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x"] Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.874571 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq"] Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.880429 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf"] Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.886987 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9"] Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.895802 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-96st6"] Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.910558 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert\") pod \"infra-operator-controller-manager-79955696d6-pd4hv\" (UID: \"5a7c85c3-b835-48f5-99ca-2c2949ab85bf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" Jan 29 16:26:23 crc kubenswrapper[4895]: E0129 16:26:23.910835 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:26:23 crc kubenswrapper[4895]: E0129 16:26:23.910937 4895 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert podName:5a7c85c3-b835-48f5-99ca-2c2949ab85bf nodeName:}" failed. No retries permitted until 2026-01-29 16:26:25.910909939 +0000 UTC m=+869.713887203 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert") pod "infra-operator-controller-manager-79955696d6-pd4hv" (UID: "5a7c85c3-b835-48f5-99ca-2c2949ab85bf") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.914275 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq"] Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.936478 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4"] Jan 29 16:26:23 crc kubenswrapper[4895]: W0129 16:26:23.938785 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebaae8a3_53e7_4aec_88cb_9723acd3350d.slice/crio-794b68741ecfc420335840dc884ffb6342d5e53ec1358a4cc8e6bf724bcf7606 WatchSource:0}: Error finding container 794b68741ecfc420335840dc884ffb6342d5e53ec1358a4cc8e6bf724bcf7606: Status 404 returned error can't find the container with id 794b68741ecfc420335840dc884ffb6342d5e53ec1358a4cc8e6bf724bcf7606 Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.976244 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w"] Jan 29 16:26:23 crc kubenswrapper[4895]: W0129 16:26:23.976387 4895 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod826ca63d_ce7b_4d52_9fb5_31bdbb523416.slice/crio-aa7f45a640b194a0367318e4665eff2b434c8fd6331ece59a60437deb529c15e WatchSource:0}: Error finding container aa7f45a640b194a0367318e4665eff2b434c8fd6331ece59a60437deb529c15e: Status 404 returned error can't find the container with id aa7f45a640b194a0367318e4665eff2b434c8fd6331ece59a60437deb529c15e Jan 29 16:26:23 crc kubenswrapper[4895]: W0129 16:26:23.981200 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15381bb6_d539_48b4_976c_5b2a27fa7aaa.slice/crio-ba0649700d6a4c71651919e5614732c02cee434a14a2c02a4cbb94f6968f9b76 WatchSource:0}: Error finding container ba0649700d6a4c71651919e5614732c02cee434a14a2c02a4cbb94f6968f9b76: Status 404 returned error can't find the container with id ba0649700d6a4c71651919e5614732c02cee434a14a2c02a4cbb94f6968f9b76 Jan 29 16:26:23 crc kubenswrapper[4895]: I0129 16:26:23.991086 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv"] Jan 29 16:26:23 crc kubenswrapper[4895]: E0129 16:26:23.997746 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qlzq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-l6b5x_openstack-operators(53136333-31ce-4a3c-9477-0dde82bc7ec0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 16:26:23 crc kubenswrapper[4895]: E0129 16:26:23.999343 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x" podUID="53136333-31ce-4a3c-9477-0dde82bc7ec0" Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.000010 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr"] Jan 29 16:26:24 crc kubenswrapper[4895]: W0129 16:26:24.002082 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbd2491c_2587_47d3_8201_26b8e68bfcb7.slice/crio-45ec778fdb809c61b6d47979acb3f3bbdac50f492759c469c3cf26ddc07d90bc WatchSource:0}: Error finding container 45ec778fdb809c61b6d47979acb3f3bbdac50f492759c469c3cf26ddc07d90bc: Status 404 returned error can't find the container with id 45ec778fdb809c61b6d47979acb3f3bbdac50f492759c469c3cf26ddc07d90bc Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.008903 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk"] Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.009467 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lsmf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-b4wtv_openstack-operators(dbd2491c-2587-47d3-8201-26b8e68bfcb7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.011367 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" podUID="dbd2491c-2587-47d3-8201-26b8e68bfcb7" Jan 29 16:26:24 crc 
kubenswrapper[4895]: I0129 16:26:24.065540 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4"]
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.074691 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c"]
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.084680 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-vcgf8"]
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.091584 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cdskb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-5z28c_openstack-operators(3c7eb208-8a46-49ed-8efd-1b0fceabd3c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.093442 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c" podUID="3c7eb208-8a46-49ed-8efd-1b0fceabd3c8"
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.127750 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2dxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-vcgf8_openstack-operators(d3b8a7bf-6741-4bfe-8835-e942e688098d): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.129247 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8" podUID="d3b8a7bf-6741-4bfe-8835-e942e688098d"
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.171142 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dtmtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-dbmk4_openstack-operators(7b0b2050-5cdb-44e0-a858-bf6aa331d2c6): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.172762 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4" podUID="7b0b2050-5cdb-44e0-a858-bf6aa331d2c6"
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.317372 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh\" (UID: \"861c2b0f-fa28-408a-b270-a7e1f9ee57e2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh"
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.317590 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.317716 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert podName:861c2b0f-fa28-408a-b270-a7e1f9ee57e2 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:26.317691695 +0000 UTC m=+870.120668949 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" (UID: "861c2b0f-fa28-408a-b270-a7e1f9ee57e2") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.381485 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w" event={"ID":"91b335a1-04fb-48e6-bf93-8bce6c4da648","Type":"ContainerStarted","Data":"2d1061b0527fa4164a2ba491dcc50727b3a78a6c3a6c0386dc39c669485b0467"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.385121 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4" event={"ID":"2fb0cdc6-64b5-432f-a998-26174db87dbb","Type":"ContainerStarted","Data":"4b7fe26293da3c170c199353945a2793e8a88c70cd617338d1b308e365e76ecd"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.387066 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq" event={"ID":"ebaae8a3-53e7-4aec-88cb-9723acd3350d","Type":"ContainerStarted","Data":"794b68741ecfc420335840dc884ffb6342d5e53ec1358a4cc8e6bf724bcf7606"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.388219 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq" event={"ID":"15381bb6-d539-48b4-976c-5b2a27fa7aaa","Type":"ContainerStarted","Data":"ba0649700d6a4c71651919e5614732c02cee434a14a2c02a4cbb94f6968f9b76"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.390686 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf" event={"ID":"14c9beca-1f3d-42cb-91d2-f7e391a9761a","Type":"ContainerStarted","Data":"c2b48527d1cfc88959ea7a497d0425deb1473d36c52e68dce8ce18dc15abec7b"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.404155 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr" event={"ID":"9730658e-c8ca-4448-a3c0-68116c92840f","Type":"ContainerStarted","Data":"82528a4257e879c18c8c94c5583ee29cc093d431e747f2f14bd617acb452ea6d"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.407796 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8" event={"ID":"f5589d31-28a0-45e8-a3fa-9b48576c81fc","Type":"ContainerStarted","Data":"7b47cc8f79aed250460c2b88377c3cbfe568f38dcdfcdbfa1cf10ebfe64f6c04"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.412544 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-96st6" event={"ID":"0458d64d-6cee-41f7-bb2d-17fe71893b95","Type":"ContainerStarted","Data":"2ceb2836dd0bc3e7478194e4dfd589be3139541b2c72aecbd88120099ecec351"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.414543 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb" event={"ID":"a8dbc4ca-5e45-424e-aa1f-6c7e9e24e74c","Type":"ContainerStarted","Data":"5dc03640e0a2c7f3b10623f074c0f48b592366ca585e1a65389b851d5cd26c3b"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.417532 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-flr48" event={"ID":"928d4ebe-bbab-4956-9d41-a6ef3c91e62d","Type":"ContainerStarted","Data":"b1fecacd268d7f2d95b5be9112eb0597dd760fc6df28a6678a8b789c61add3be"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.420338 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk" event={"ID":"826ca63d-ce7b-4d52-9fb5-31bdbb523416","Type":"ContainerStarted","Data":"aa7f45a640b194a0367318e4665eff2b434c8fd6331ece59a60437deb529c15e"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.421794 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c" event={"ID":"3c7eb208-8a46-49ed-8efd-1b0fceabd3c8","Type":"ContainerStarted","Data":"5a64f8e0c0d6b2e2d1d3e04e535f82aebdd0d945156f6652d31841e2de57327f"}
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.425789 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c" podUID="3c7eb208-8a46-49ed-8efd-1b0fceabd3c8"
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.440023 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9" event={"ID":"0c18b9fd-01fa-4be9-be45-5ad49240591a","Type":"ContainerStarted","Data":"8220b4f550e3e45734c46d33fc267ef390eb2b324781e06cee221e79ec8c5d13"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.442026 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4" event={"ID":"7b0b2050-5cdb-44e0-a858-bf6aa331d2c6","Type":"ContainerStarted","Data":"afff3a1ed6e0adb75f84f159a069bf967bc815087a0ff37b7f1bfe788f6b8850"}
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.442977 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x" event={"ID":"53136333-31ce-4a3c-9477-0dde82bc7ec0","Type":"ContainerStarted","Data":"0e621df95795da399a115f04a7d1f2c5b9b7111642c4128a62efe2e9b0c2a33d"}
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.443965 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4" podUID="7b0b2050-5cdb-44e0-a858-bf6aa331d2c6"
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.444691 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8" event={"ID":"d3b8a7bf-6741-4bfe-8835-e942e688098d","Type":"ContainerStarted","Data":"573bb17e42cc6ec6a8340062840972f2992d98cd9d34e09d193a00fb9ce4e4db"}
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.444790 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x" podUID="53136333-31ce-4a3c-9477-0dde82bc7ec0"
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.445642 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8" podUID="d3b8a7bf-6741-4bfe-8835-e942e688098d"
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.447497 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" event={"ID":"dbd2491c-2587-47d3-8201-26b8e68bfcb7","Type":"ContainerStarted","Data":"45ec778fdb809c61b6d47979acb3f3bbdac50f492759c469c3cf26ddc07d90bc"}
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.464658 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" podUID="dbd2491c-2587-47d3-8201-26b8e68bfcb7"
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.519390 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg"
Jan 29 16:26:24 crc kubenswrapper[4895]: I0129 16:26:24.519463 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg"
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.519587 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.519636 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs podName:7c8841b5-eefc-4ce3-bb5b-111a252e4316 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:26.519621754 +0000 UTC m=+870.322599018 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs") pod "openstack-operator-controller-manager-6dbd47d457-rqtbg" (UID: "7c8841b5-eefc-4ce3-bb5b-111a252e4316") : secret "metrics-server-cert" not found
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.519977 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 29 16:26:24 crc kubenswrapper[4895]: E0129 16:26:24.520011 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs podName:7c8841b5-eefc-4ce3-bb5b-111a252e4316 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:26.520003974 +0000 UTC m=+870.322981238 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs") pod "openstack-operator-controller-manager-6dbd47d457-rqtbg" (UID: "7c8841b5-eefc-4ce3-bb5b-111a252e4316") : secret "webhook-server-cert" not found
Jan 29 16:26:25 crc kubenswrapper[4895]: E0129 16:26:25.521673 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c" podUID="3c7eb208-8a46-49ed-8efd-1b0fceabd3c8"
Jan 29 16:26:25 crc kubenswrapper[4895]: E0129 16:26:25.522158 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4" podUID="7b0b2050-5cdb-44e0-a858-bf6aa331d2c6"
Jan 29 16:26:25 crc kubenswrapper[4895]: E0129 16:26:25.522233 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x" podUID="53136333-31ce-4a3c-9477-0dde82bc7ec0"
Jan 29 16:26:25 crc kubenswrapper[4895]: E0129 16:26:25.522281 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" podUID="dbd2491c-2587-47d3-8201-26b8e68bfcb7"
Jan 29 16:26:25 crc kubenswrapper[4895]: E0129 16:26:25.526240 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8" podUID="d3b8a7bf-6741-4bfe-8835-e942e688098d"
Jan 29 16:26:25 crc kubenswrapper[4895]: I0129 16:26:25.950609 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert\") pod \"infra-operator-controller-manager-79955696d6-pd4hv\" (UID: \"5a7c85c3-b835-48f5-99ca-2c2949ab85bf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv"
Jan 29 16:26:25 crc kubenswrapper[4895]: E0129 16:26:25.951177 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 29 16:26:25 crc kubenswrapper[4895]: E0129 16:26:25.951228 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert podName:5a7c85c3-b835-48f5-99ca-2c2949ab85bf nodeName:}" failed. No retries permitted until 2026-01-29 16:26:29.951210848 +0000 UTC m=+873.754188112 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert") pod "infra-operator-controller-manager-79955696d6-pd4hv" (UID: "5a7c85c3-b835-48f5-99ca-2c2949ab85bf") : secret "infra-operator-webhook-server-cert" not found
Jan 29 16:26:26 crc kubenswrapper[4895]: I0129 16:26:26.361615 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh\" (UID: \"861c2b0f-fa28-408a-b270-a7e1f9ee57e2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh"
Jan 29 16:26:26 crc kubenswrapper[4895]: E0129 16:26:26.361879 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 16:26:26 crc kubenswrapper[4895]: E0129 16:26:26.361934 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert podName:861c2b0f-fa28-408a-b270-a7e1f9ee57e2 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:30.361914969 +0000 UTC m=+874.164892233 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" (UID: "861c2b0f-fa28-408a-b270-a7e1f9ee57e2") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 16:26:26 crc kubenswrapper[4895]: I0129 16:26:26.564257 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg"
Jan 29 16:26:26 crc kubenswrapper[4895]: I0129 16:26:26.564388 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg"
Jan 29 16:26:26 crc kubenswrapper[4895]: E0129 16:26:26.564682 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 29 16:26:26 crc kubenswrapper[4895]: E0129 16:26:26.564721 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 29 16:26:26 crc kubenswrapper[4895]: E0129 16:26:26.564770 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs podName:7c8841b5-eefc-4ce3-bb5b-111a252e4316 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:30.564747351 +0000 UTC m=+874.367724615 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs") pod "openstack-operator-controller-manager-6dbd47d457-rqtbg" (UID: "7c8841b5-eefc-4ce3-bb5b-111a252e4316") : secret "metrics-server-cert" not found
Jan 29 16:26:26 crc kubenswrapper[4895]: E0129 16:26:26.564839 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs podName:7c8841b5-eefc-4ce3-bb5b-111a252e4316 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:30.564782412 +0000 UTC m=+874.367759766 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs") pod "openstack-operator-controller-manager-6dbd47d457-rqtbg" (UID: "7c8841b5-eefc-4ce3-bb5b-111a252e4316") : secret "webhook-server-cert" not found
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.012980 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert\") pod \"infra-operator-controller-manager-79955696d6-pd4hv\" (UID: \"5a7c85c3-b835-48f5-99ca-2c2949ab85bf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv"
Jan 29 16:26:30 crc kubenswrapper[4895]: E0129 16:26:30.013410 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 29 16:26:30 crc kubenswrapper[4895]: E0129 16:26:30.013470 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert podName:5a7c85c3-b835-48f5-99ca-2c2949ab85bf nodeName:}" failed. No retries permitted until 2026-01-29 16:26:38.01344749 +0000 UTC m=+881.816424744 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert") pod "infra-operator-controller-manager-79955696d6-pd4hv" (UID: "5a7c85c3-b835-48f5-99ca-2c2949ab85bf") : secret "infra-operator-webhook-server-cert" not found
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.286839 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xgtj6"]
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.288831 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.303164 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgtj6"]
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.420516 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jsm6\" (UniqueName: \"kubernetes.io/projected/e67dca96-61b0-4b00-a504-d06571c94346-kube-api-access-4jsm6\") pod \"certified-operators-xgtj6\" (UID: \"e67dca96-61b0-4b00-a504-d06571c94346\") " pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.420576 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67dca96-61b0-4b00-a504-d06571c94346-utilities\") pod \"certified-operators-xgtj6\" (UID: \"e67dca96-61b0-4b00-a504-d06571c94346\") " pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.420679 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh\" (UID: \"861c2b0f-fa28-408a-b270-a7e1f9ee57e2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh"
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.420733 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67dca96-61b0-4b00-a504-d06571c94346-catalog-content\") pod \"certified-operators-xgtj6\" (UID: \"e67dca96-61b0-4b00-a504-d06571c94346\") " pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:26:30 crc kubenswrapper[4895]: E0129 16:26:30.420976 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 16:26:30 crc kubenswrapper[4895]: E0129 16:26:30.421036 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert podName:861c2b0f-fa28-408a-b270-a7e1f9ee57e2 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:38.421017507 +0000 UTC m=+882.223994771 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" (UID: "861c2b0f-fa28-408a-b270-a7e1f9ee57e2") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.522081 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67dca96-61b0-4b00-a504-d06571c94346-catalog-content\") pod \"certified-operators-xgtj6\" (UID: \"e67dca96-61b0-4b00-a504-d06571c94346\") " pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.522207 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jsm6\" (UniqueName: \"kubernetes.io/projected/e67dca96-61b0-4b00-a504-d06571c94346-kube-api-access-4jsm6\") pod \"certified-operators-xgtj6\" (UID: \"e67dca96-61b0-4b00-a504-d06571c94346\") " pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.522241 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67dca96-61b0-4b00-a504-d06571c94346-utilities\") pod \"certified-operators-xgtj6\" (UID: \"e67dca96-61b0-4b00-a504-d06571c94346\") " pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.522724 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67dca96-61b0-4b00-a504-d06571c94346-catalog-content\") pod \"certified-operators-xgtj6\" (UID: \"e67dca96-61b0-4b00-a504-d06571c94346\") " pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.522812 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67dca96-61b0-4b00-a504-d06571c94346-utilities\") pod \"certified-operators-xgtj6\" (UID: \"e67dca96-61b0-4b00-a504-d06571c94346\") " pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.555226 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jsm6\" (UniqueName: \"kubernetes.io/projected/e67dca96-61b0-4b00-a504-d06571c94346-kube-api-access-4jsm6\") pod \"certified-operators-xgtj6\" (UID: \"e67dca96-61b0-4b00-a504-d06571c94346\") " pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.623765 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg"
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.624387 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg"
Jan 29 16:26:30 crc kubenswrapper[4895]: E0129 16:26:30.624154 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 29 16:26:30 crc kubenswrapper[4895]: E0129 16:26:30.624523 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs podName:7c8841b5-eefc-4ce3-bb5b-111a252e4316 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:38.624493827 +0000 UTC m=+882.427471091 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs") pod "openstack-operator-controller-manager-6dbd47d457-rqtbg" (UID: "7c8841b5-eefc-4ce3-bb5b-111a252e4316") : secret "metrics-server-cert" not found
Jan 29 16:26:30 crc kubenswrapper[4895]: E0129 16:26:30.624585 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 29 16:26:30 crc kubenswrapper[4895]: E0129 16:26:30.624704 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs podName:7c8841b5-eefc-4ce3-bb5b-111a252e4316 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:38.624639891 +0000 UTC m=+882.427617325 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs") pod "openstack-operator-controller-manager-6dbd47d457-rqtbg" (UID: "7c8841b5-eefc-4ce3-bb5b-111a252e4316") : secret "webhook-server-cert" not found
Jan 29 16:26:30 crc kubenswrapper[4895]: I0129 16:26:30.656198 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.137048 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-27pfv"]
Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.146139 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-27pfv"
Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.156309 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-27pfv"]
Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.240198 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/507c5a6c-56bc-4fed-87c1-660e5786c979-utilities\") pod \"redhat-operators-27pfv\" (UID: \"507c5a6c-56bc-4fed-87c1-660e5786c979\") " pod="openshift-marketplace/redhat-operators-27pfv"
Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.240252 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/507c5a6c-56bc-4fed-87c1-660e5786c979-catalog-content\") pod \"redhat-operators-27pfv\" (UID: \"507c5a6c-56bc-4fed-87c1-660e5786c979\") " pod="openshift-marketplace/redhat-operators-27pfv"
Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.240402 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnj8d\" (UniqueName: \"kubernetes.io/projected/507c5a6c-56bc-4fed-87c1-660e5786c979-kube-api-access-xnj8d\") pod \"redhat-operators-27pfv\" (UID: \"507c5a6c-56bc-4fed-87c1-660e5786c979\") " pod="openshift-marketplace/redhat-operators-27pfv"
Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.343049 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnj8d\" (UniqueName: \"kubernetes.io/projected/507c5a6c-56bc-4fed-87c1-660e5786c979-kube-api-access-xnj8d\") pod \"redhat-operators-27pfv\" (UID: \"507c5a6c-56bc-4fed-87c1-660e5786c979\") " pod="openshift-marketplace/redhat-operators-27pfv"
Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.343170 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/507c5a6c-56bc-4fed-87c1-660e5786c979-utilities\") pod \"redhat-operators-27pfv\" (UID: \"507c5a6c-56bc-4fed-87c1-660e5786c979\") " pod="openshift-marketplace/redhat-operators-27pfv" Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.343203 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/507c5a6c-56bc-4fed-87c1-660e5786c979-catalog-content\") pod \"redhat-operators-27pfv\" (UID: \"507c5a6c-56bc-4fed-87c1-660e5786c979\") " pod="openshift-marketplace/redhat-operators-27pfv" Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.344130 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/507c5a6c-56bc-4fed-87c1-660e5786c979-utilities\") pod \"redhat-operators-27pfv\" (UID: \"507c5a6c-56bc-4fed-87c1-660e5786c979\") " pod="openshift-marketplace/redhat-operators-27pfv" Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.344206 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/507c5a6c-56bc-4fed-87c1-660e5786c979-catalog-content\") pod \"redhat-operators-27pfv\" (UID: \"507c5a6c-56bc-4fed-87c1-660e5786c979\") " pod="openshift-marketplace/redhat-operators-27pfv" Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.366670 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnj8d\" (UniqueName: \"kubernetes.io/projected/507c5a6c-56bc-4fed-87c1-660e5786c979-kube-api-access-xnj8d\") pod \"redhat-operators-27pfv\" (UID: \"507c5a6c-56bc-4fed-87c1-660e5786c979\") " pod="openshift-marketplace/redhat-operators-27pfv" Jan 29 16:26:36 crc kubenswrapper[4895]: I0129 16:26:36.481475 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-27pfv" Jan 29 16:26:38 crc kubenswrapper[4895]: I0129 16:26:38.070003 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert\") pod \"infra-operator-controller-manager-79955696d6-pd4hv\" (UID: \"5a7c85c3-b835-48f5-99ca-2c2949ab85bf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" Jan 29 16:26:38 crc kubenswrapper[4895]: E0129 16:26:38.070185 4895 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:26:38 crc kubenswrapper[4895]: E0129 16:26:38.070240 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert podName:5a7c85c3-b835-48f5-99ca-2c2949ab85bf nodeName:}" failed. No retries permitted until 2026-01-29 16:26:54.070220025 +0000 UTC m=+897.873197289 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert") pod "infra-operator-controller-manager-79955696d6-pd4hv" (UID: "5a7c85c3-b835-48f5-99ca-2c2949ab85bf") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:26:38 crc kubenswrapper[4895]: I0129 16:26:38.476339 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh\" (UID: \"861c2b0f-fa28-408a-b270-a7e1f9ee57e2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" Jan 29 16:26:38 crc kubenswrapper[4895]: E0129 16:26:38.476551 4895 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:26:38 crc kubenswrapper[4895]: E0129 16:26:38.476612 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert podName:861c2b0f-fa28-408a-b270-a7e1f9ee57e2 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:54.476593571 +0000 UTC m=+898.279570825 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" (UID: "861c2b0f-fa28-408a-b270-a7e1f9ee57e2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:26:38 crc kubenswrapper[4895]: I0129 16:26:38.680231 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:38 crc kubenswrapper[4895]: I0129 16:26:38.680859 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:38 crc kubenswrapper[4895]: E0129 16:26:38.680508 4895 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:26:38 crc kubenswrapper[4895]: E0129 16:26:38.681161 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs podName:7c8841b5-eefc-4ce3-bb5b-111a252e4316 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:54.681143469 +0000 UTC m=+898.484120733 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs") pod "openstack-operator-controller-manager-6dbd47d457-rqtbg" (UID: "7c8841b5-eefc-4ce3-bb5b-111a252e4316") : secret "webhook-server-cert" not found Jan 29 16:26:38 crc kubenswrapper[4895]: E0129 16:26:38.681085 4895 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:26:38 crc kubenswrapper[4895]: E0129 16:26:38.681743 4895 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs podName:7c8841b5-eefc-4ce3-bb5b-111a252e4316 nodeName:}" failed. No retries permitted until 2026-01-29 16:26:54.681700374 +0000 UTC m=+898.484677788 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs") pod "openstack-operator-controller-manager-6dbd47d457-rqtbg" (UID: "7c8841b5-eefc-4ce3-bb5b-111a252e4316") : secret "metrics-server-cert" not found Jan 29 16:26:42 crc kubenswrapper[4895]: E0129 16:26:42.799659 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 29 16:26:42 crc kubenswrapper[4895]: E0129 16:26:42.801164 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x8z9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-f7b6w_openstack-operators(91b335a1-04fb-48e6-bf93-8bce6c4da648): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:26:42 crc kubenswrapper[4895]: E0129 16:26:42.802580 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w" podUID="91b335a1-04fb-48e6-bf93-8bce6c4da648" Jan 29 16:26:43 crc kubenswrapper[4895]: E0129 16:26:43.695697 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w" podUID="91b335a1-04fb-48e6-bf93-8bce6c4da648" Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.197148 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-g9bm8"] Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.199366 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9bm8" Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.209830 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g9bm8"] Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.346933 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88503b6f-3ee9-4640-80a3-c1543f577b7f-catalog-content\") pod \"community-operators-g9bm8\" (UID: \"88503b6f-3ee9-4640-80a3-c1543f577b7f\") " pod="openshift-marketplace/community-operators-g9bm8" Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.347032 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnr5q\" (UniqueName: \"kubernetes.io/projected/88503b6f-3ee9-4640-80a3-c1543f577b7f-kube-api-access-gnr5q\") pod \"community-operators-g9bm8\" (UID: \"88503b6f-3ee9-4640-80a3-c1543f577b7f\") " pod="openshift-marketplace/community-operators-g9bm8" Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.347105 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88503b6f-3ee9-4640-80a3-c1543f577b7f-utilities\") pod \"community-operators-g9bm8\" (UID: \"88503b6f-3ee9-4640-80a3-c1543f577b7f\") " pod="openshift-marketplace/community-operators-g9bm8" Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.448421 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88503b6f-3ee9-4640-80a3-c1543f577b7f-utilities\") pod \"community-operators-g9bm8\" (UID: \"88503b6f-3ee9-4640-80a3-c1543f577b7f\") 
" pod="openshift-marketplace/community-operators-g9bm8" Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.448534 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88503b6f-3ee9-4640-80a3-c1543f577b7f-catalog-content\") pod \"community-operators-g9bm8\" (UID: \"88503b6f-3ee9-4640-80a3-c1543f577b7f\") " pod="openshift-marketplace/community-operators-g9bm8" Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.448580 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnr5q\" (UniqueName: \"kubernetes.io/projected/88503b6f-3ee9-4640-80a3-c1543f577b7f-kube-api-access-gnr5q\") pod \"community-operators-g9bm8\" (UID: \"88503b6f-3ee9-4640-80a3-c1543f577b7f\") " pod="openshift-marketplace/community-operators-g9bm8" Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.449243 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88503b6f-3ee9-4640-80a3-c1543f577b7f-utilities\") pod \"community-operators-g9bm8\" (UID: \"88503b6f-3ee9-4640-80a3-c1543f577b7f\") " pod="openshift-marketplace/community-operators-g9bm8" Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.449319 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88503b6f-3ee9-4640-80a3-c1543f577b7f-catalog-content\") pod \"community-operators-g9bm8\" (UID: \"88503b6f-3ee9-4640-80a3-c1543f577b7f\") " pod="openshift-marketplace/community-operators-g9bm8" Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.474165 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnr5q\" (UniqueName: \"kubernetes.io/projected/88503b6f-3ee9-4640-80a3-c1543f577b7f-kube-api-access-gnr5q\") pod \"community-operators-g9bm8\" (UID: \"88503b6f-3ee9-4640-80a3-c1543f577b7f\") " 
pod="openshift-marketplace/community-operators-g9bm8" Jan 29 16:26:47 crc kubenswrapper[4895]: I0129 16:26:47.532926 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9bm8" Jan 29 16:26:52 crc kubenswrapper[4895]: I0129 16:26:52.748167 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6p676"] Jan 29 16:26:52 crc kubenswrapper[4895]: I0129 16:26:52.752825 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6p676" Jan 29 16:26:52 crc kubenswrapper[4895]: I0129 16:26:52.762584 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p676"] Jan 29 16:26:52 crc kubenswrapper[4895]: I0129 16:26:52.862286 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9czw\" (UniqueName: \"kubernetes.io/projected/5debaad0-76f7-41af-b13b-13844bd3b73b-kube-api-access-v9czw\") pod \"redhat-marketplace-6p676\" (UID: \"5debaad0-76f7-41af-b13b-13844bd3b73b\") " pod="openshift-marketplace/redhat-marketplace-6p676" Jan 29 16:26:52 crc kubenswrapper[4895]: I0129 16:26:52.862660 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5debaad0-76f7-41af-b13b-13844bd3b73b-catalog-content\") pod \"redhat-marketplace-6p676\" (UID: \"5debaad0-76f7-41af-b13b-13844bd3b73b\") " pod="openshift-marketplace/redhat-marketplace-6p676" Jan 29 16:26:52 crc kubenswrapper[4895]: I0129 16:26:52.862789 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5debaad0-76f7-41af-b13b-13844bd3b73b-utilities\") pod \"redhat-marketplace-6p676\" (UID: \"5debaad0-76f7-41af-b13b-13844bd3b73b\") " 
pod="openshift-marketplace/redhat-marketplace-6p676" Jan 29 16:26:52 crc kubenswrapper[4895]: I0129 16:26:52.964248 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5debaad0-76f7-41af-b13b-13844bd3b73b-catalog-content\") pod \"redhat-marketplace-6p676\" (UID: \"5debaad0-76f7-41af-b13b-13844bd3b73b\") " pod="openshift-marketplace/redhat-marketplace-6p676" Jan 29 16:26:52 crc kubenswrapper[4895]: I0129 16:26:52.964333 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5debaad0-76f7-41af-b13b-13844bd3b73b-utilities\") pod \"redhat-marketplace-6p676\" (UID: \"5debaad0-76f7-41af-b13b-13844bd3b73b\") " pod="openshift-marketplace/redhat-marketplace-6p676" Jan 29 16:26:52 crc kubenswrapper[4895]: I0129 16:26:52.964453 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9czw\" (UniqueName: \"kubernetes.io/projected/5debaad0-76f7-41af-b13b-13844bd3b73b-kube-api-access-v9czw\") pod \"redhat-marketplace-6p676\" (UID: \"5debaad0-76f7-41af-b13b-13844bd3b73b\") " pod="openshift-marketplace/redhat-marketplace-6p676" Jan 29 16:26:52 crc kubenswrapper[4895]: I0129 16:26:52.964964 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5debaad0-76f7-41af-b13b-13844bd3b73b-utilities\") pod \"redhat-marketplace-6p676\" (UID: \"5debaad0-76f7-41af-b13b-13844bd3b73b\") " pod="openshift-marketplace/redhat-marketplace-6p676" Jan 29 16:26:52 crc kubenswrapper[4895]: I0129 16:26:52.965336 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5debaad0-76f7-41af-b13b-13844bd3b73b-catalog-content\") pod \"redhat-marketplace-6p676\" (UID: \"5debaad0-76f7-41af-b13b-13844bd3b73b\") " pod="openshift-marketplace/redhat-marketplace-6p676" 
Jan 29 16:26:53 crc kubenswrapper[4895]: I0129 16:26:52.999771 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9czw\" (UniqueName: \"kubernetes.io/projected/5debaad0-76f7-41af-b13b-13844bd3b73b-kube-api-access-v9czw\") pod \"redhat-marketplace-6p676\" (UID: \"5debaad0-76f7-41af-b13b-13844bd3b73b\") " pod="openshift-marketplace/redhat-marketplace-6p676" Jan 29 16:26:53 crc kubenswrapper[4895]: I0129 16:26:53.134423 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6p676" Jan 29 16:26:54 crc kubenswrapper[4895]: I0129 16:26:54.087419 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert\") pod \"infra-operator-controller-manager-79955696d6-pd4hv\" (UID: \"5a7c85c3-b835-48f5-99ca-2c2949ab85bf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" Jan 29 16:26:54 crc kubenswrapper[4895]: I0129 16:26:54.097459 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a7c85c3-b835-48f5-99ca-2c2949ab85bf-cert\") pod \"infra-operator-controller-manager-79955696d6-pd4hv\" (UID: \"5a7c85c3-b835-48f5-99ca-2c2949ab85bf\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" Jan 29 16:26:54 crc kubenswrapper[4895]: I0129 16:26:54.240645 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" Jan 29 16:26:54 crc kubenswrapper[4895]: I0129 16:26:54.496439 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh\" (UID: \"861c2b0f-fa28-408a-b270-a7e1f9ee57e2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" Jan 29 16:26:54 crc kubenswrapper[4895]: I0129 16:26:54.531003 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/861c2b0f-fa28-408a-b270-a7e1f9ee57e2-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh\" (UID: \"861c2b0f-fa28-408a-b270-a7e1f9ee57e2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" Jan 29 16:26:54 crc kubenswrapper[4895]: I0129 16:26:54.682262 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" Jan 29 16:26:54 crc kubenswrapper[4895]: I0129 16:26:54.699648 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:54 crc kubenswrapper[4895]: I0129 16:26:54.699757 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:54 crc kubenswrapper[4895]: I0129 16:26:54.704689 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-metrics-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:54 crc kubenswrapper[4895]: I0129 16:26:54.706664 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7c8841b5-eefc-4ce3-bb5b-111a252e4316-webhook-certs\") pod \"openstack-operator-controller-manager-6dbd47d457-rqtbg\" (UID: \"7c8841b5-eefc-4ce3-bb5b-111a252e4316\") " pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:54 crc kubenswrapper[4895]: E0129 16:26:54.860008 4895 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 29 16:26:54 crc kubenswrapper[4895]: E0129 16:26:54.860728 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dtwbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-96st6_openstack-operators(0458d64d-6cee-41f7-bb2d-17fe71893b95): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:26:54 crc kubenswrapper[4895]: E0129 16:26:54.862133 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-96st6" podUID="0458d64d-6cee-41f7-bb2d-17fe71893b95" Jan 29 16:26:54 crc kubenswrapper[4895]: I0129 16:26:54.987336 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:26:55 crc kubenswrapper[4895]: E0129 16:26:55.510276 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 29 16:26:55 crc kubenswrapper[4895]: E0129 16:26:55.510550 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kqgjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-cmhwf_openstack-operators(14c9beca-1f3d-42cb-91d2-f7e391a9761a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:26:55 crc kubenswrapper[4895]: E0129 16:26:55.512465 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf" podUID="14c9beca-1f3d-42cb-91d2-f7e391a9761a" Jan 29 16:26:55 crc kubenswrapper[4895]: E0129 16:26:55.797203 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-96st6" podUID="0458d64d-6cee-41f7-bb2d-17fe71893b95" Jan 29 16:26:55 crc kubenswrapper[4895]: E0129 16:26:55.797685 4895 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf" podUID="14c9beca-1f3d-42cb-91d2-f7e391a9761a" Jan 29 16:26:56 crc kubenswrapper[4895]: E0129 16:26:56.935541 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 29 16:26:56 crc kubenswrapper[4895]: E0129 16:26:56.935983 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2n2lp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-29rf4_openstack-operators(2fb0cdc6-64b5-432f-a998-26174db87dbb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:26:56 crc kubenswrapper[4895]: E0129 16:26:56.937221 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4" podUID="2fb0cdc6-64b5-432f-a998-26174db87dbb" Jan 29 16:26:57 crc kubenswrapper[4895]: I0129 16:26:57.044590 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:26:57 crc kubenswrapper[4895]: E0129 16:26:57.389813 4895 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 16:26:57 crc kubenswrapper[4895]: E0129 16:26:57.390146 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lsmf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-b4wtv_openstack-operators(dbd2491c-2587-47d3-8201-26b8e68bfcb7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:26:57 crc kubenswrapper[4895]: E0129 16:26:57.391842 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" podUID="dbd2491c-2587-47d3-8201-26b8e68bfcb7" Jan 29 16:26:57 crc kubenswrapper[4895]: E0129 16:26:57.814792 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4" podUID="2fb0cdc6-64b5-432f-a998-26174db87dbb" Jan 29 16:26:57 crc kubenswrapper[4895]: I0129 16:26:57.875003 4895 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/certified-operators-xgtj6"] Jan 29 16:26:59 crc kubenswrapper[4895]: I0129 16:26:59.828851 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgtj6" event={"ID":"e67dca96-61b0-4b00-a504-d06571c94346","Type":"ContainerStarted","Data":"c71f24932cf8512c4c87fb8726f3c27556900952d60b66f84f42b1d622f90da4"} Jan 29 16:27:00 crc kubenswrapper[4895]: I0129 16:27:00.569417 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g9bm8"] Jan 29 16:27:00 crc kubenswrapper[4895]: W0129 16:27:00.956471 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88503b6f_3ee9_4640_80a3_c1543f577b7f.slice/crio-86414fb725d5aebefd7a84de97d0b2a13503d236564e397a88a9beb0ab849098 WatchSource:0}: Error finding container 86414fb725d5aebefd7a84de97d0b2a13503d236564e397a88a9beb0ab849098: Status 404 returned error can't find the container with id 86414fb725d5aebefd7a84de97d0b2a13503d236564e397a88a9beb0ab849098 Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.203487 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-27pfv"] Jan 29 16:27:01 crc kubenswrapper[4895]: W0129 16:27:01.282777 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod507c5a6c_56bc_4fed_87c1_660e5786c979.slice/crio-42c5072daacaecdc990b3f947e17664adc5e57be336c081caf06c716b8eb3bdd WatchSource:0}: Error finding container 42c5072daacaecdc990b3f947e17664adc5e57be336c081caf06c716b8eb3bdd: Status 404 returned error can't find the container with id 42c5072daacaecdc990b3f947e17664adc5e57be336c081caf06c716b8eb3bdd Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.517255 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p676"] Jan 29 
16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.525966 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv"] Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.572462 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh"] Jan 29 16:27:01 crc kubenswrapper[4895]: W0129 16:27:01.606014 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5debaad0_76f7_41af_b13b_13844bd3b73b.slice/crio-ab7062373d11a3f1af5dd5fac095f5eeab637e36ab568b71610d3bdccc2041d6 WatchSource:0}: Error finding container ab7062373d11a3f1af5dd5fac095f5eeab637e36ab568b71610d3bdccc2041d6: Status 404 returned error can't find the container with id ab7062373d11a3f1af5dd5fac095f5eeab637e36ab568b71610d3bdccc2041d6 Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.653962 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg"] Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.853100 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p676" event={"ID":"5debaad0-76f7-41af-b13b-13844bd3b73b","Type":"ContainerStarted","Data":"ab7062373d11a3f1af5dd5fac095f5eeab637e36ab568b71610d3bdccc2041d6"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.857661 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv" event={"ID":"6377326a-b83d-43f6-bb58-fcf54eac8ac2","Type":"ContainerStarted","Data":"1799adeb8fbac1d46e52e11a88553bb510fe8faa43e6dd12ec398c0be683f447"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.858440 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv" Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.861154 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq" event={"ID":"ebaae8a3-53e7-4aec-88cb-9723acd3350d","Type":"ContainerStarted","Data":"5765bb4f0165ea76fd3e0b607ef57f0b4a478137f9cd5d53cbe1bdd00179f6e2"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.861790 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq" Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.867354 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9bm8" event={"ID":"88503b6f-3ee9-4640-80a3-c1543f577b7f","Type":"ContainerStarted","Data":"86414fb725d5aebefd7a84de97d0b2a13503d236564e397a88a9beb0ab849098"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.877933 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" event={"ID":"7c8841b5-eefc-4ce3-bb5b-111a252e4316","Type":"ContainerStarted","Data":"ab3c3f7f8273875445365358078e9b70c2553d3864e8b96ca844bc8f28b24633"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.883337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb" event={"ID":"a8dbc4ca-5e45-424e-aa1f-6c7e9e24e74c","Type":"ContainerStarted","Data":"139469e7480493f73719607e1f1a4b1f8c7ec18408c871eab7640cf179235f30"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.884385 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb" Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.888196 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8" event={"ID":"f5589d31-28a0-45e8-a3fa-9b48576c81fc","Type":"ContainerStarted","Data":"cde3713ec76d20f63858f7463618466eda7e8c50ceb2fb2dfd66076032935a8d"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.888362 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8" Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.892229 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" event={"ID":"861c2b0f-fa28-408a-b270-a7e1f9ee57e2","Type":"ContainerStarted","Data":"fbb9971279e5e25c985ef6b8d4641be35eb5d623b3f5a2537b59fd7f4c02a3eb"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.897947 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv" podStartSLOduration=7.511586211 podStartE2EDuration="40.897916361s" podCreationTimestamp="2026-01-29 16:26:21 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.058853976 +0000 UTC m=+866.861831240" lastFinishedPulling="2026-01-29 16:26:56.445184126 +0000 UTC m=+900.248161390" observedRunningTime="2026-01-29 16:27:01.892694703 +0000 UTC m=+905.695671977" watchObservedRunningTime="2026-01-29 16:27:01.897916361 +0000 UTC m=+905.700893635" Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.900533 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27pfv" event={"ID":"507c5a6c-56bc-4fed-87c1-660e5786c979","Type":"ContainerStarted","Data":"42c5072daacaecdc990b3f947e17664adc5e57be336c081caf06c716b8eb3bdd"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.907497 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" 
event={"ID":"5a7c85c3-b835-48f5-99ca-2c2949ab85bf","Type":"ContainerStarted","Data":"1f71d81cfbe6f66d9c1e5e637cb4aacaaf306c445f591ec17cb1031e3df20de8"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.909129 4895 generic.go:334] "Generic (PLEG): container finished" podID="e67dca96-61b0-4b00-a504-d06571c94346" containerID="518c518aa41d512f79e9457a561fc8275e3db63f5b667caabedf3349c68bef26" exitCode=0 Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.910044 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgtj6" event={"ID":"e67dca96-61b0-4b00-a504-d06571c94346","Type":"ContainerDied","Data":"518c518aa41d512f79e9457a561fc8275e3db63f5b667caabedf3349c68bef26"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.933831 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8" podStartSLOduration=8.921360716 podStartE2EDuration="40.933792954s" podCreationTimestamp="2026-01-29 16:26:21 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.458971944 +0000 UTC m=+867.261949208" lastFinishedPulling="2026-01-29 16:26:55.471404182 +0000 UTC m=+899.274381446" observedRunningTime="2026-01-29 16:27:01.927663951 +0000 UTC m=+905.730641235" watchObservedRunningTime="2026-01-29 16:27:01.933792954 +0000 UTC m=+905.736770228" Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.941326 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk" event={"ID":"e605b5cd-74f0-4c19-b7e7-9f726595eeb5","Type":"ContainerStarted","Data":"fc3ba214a07a5dc9d8072f4fdd0bfd94c2b898768085e49933251d120fa490a1"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.942066 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk" Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 
16:27:01.956133 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-flr48" event={"ID":"928d4ebe-bbab-4956-9d41-a6ef3c91e62d","Type":"ContainerStarted","Data":"66683cb45b0885c6c879db3d927446ee2544ff803d966d0176cd450d792e4a05"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.956257 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-flr48" Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.960572 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb" podStartSLOduration=7.957374521 podStartE2EDuration="40.960548423s" podCreationTimestamp="2026-01-29 16:26:21 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.441791568 +0000 UTC m=+867.244768822" lastFinishedPulling="2026-01-29 16:26:56.44496545 +0000 UTC m=+900.247942724" observedRunningTime="2026-01-29 16:27:01.955424117 +0000 UTC m=+905.758401391" watchObservedRunningTime="2026-01-29 16:27:01.960548423 +0000 UTC m=+905.763525687" Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.964714 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9" event={"ID":"0c18b9fd-01fa-4be9-be45-5ad49240591a","Type":"ContainerStarted","Data":"02942a1ae3700710f82c5f9af08fb97644f7fdeb88457b0123c75021d80fa47d"} Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.965792 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9" Jan 29 16:27:01 crc kubenswrapper[4895]: I0129 16:27:01.989659 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq" podStartSLOduration=7.524922234 podStartE2EDuration="39.989635646s" 
podCreationTimestamp="2026-01-29 16:26:22 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.981262386 +0000 UTC m=+867.784239650" lastFinishedPulling="2026-01-29 16:26:56.445975788 +0000 UTC m=+900.248953062" observedRunningTime="2026-01-29 16:27:01.986397429 +0000 UTC m=+905.789374723" watchObservedRunningTime="2026-01-29 16:27:01.989635646 +0000 UTC m=+905.792612900" Jan 29 16:27:02 crc kubenswrapper[4895]: I0129 16:27:02.023755 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-flr48" podStartSLOduration=8.04661251 podStartE2EDuration="41.02372232s" podCreationTimestamp="2026-01-29 16:26:21 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.46785181 +0000 UTC m=+867.270829064" lastFinishedPulling="2026-01-29 16:26:56.44496161 +0000 UTC m=+900.247938874" observedRunningTime="2026-01-29 16:27:02.015254776 +0000 UTC m=+905.818232040" watchObservedRunningTime="2026-01-29 16:27:02.02372232 +0000 UTC m=+905.826699604" Jan 29 16:27:02 crc kubenswrapper[4895]: I0129 16:27:02.058363 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9" podStartSLOduration=7.662898747 podStartE2EDuration="41.058337819s" podCreationTimestamp="2026-01-29 16:26:21 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.977816075 +0000 UTC m=+867.780793339" lastFinishedPulling="2026-01-29 16:26:57.373255147 +0000 UTC m=+901.176232411" observedRunningTime="2026-01-29 16:27:02.055431762 +0000 UTC m=+905.858409036" watchObservedRunningTime="2026-01-29 16:27:02.058337819 +0000 UTC m=+905.861315083" Jan 29 16:27:02 crc kubenswrapper[4895]: I0129 16:27:02.082639 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk" podStartSLOduration=7.963246787 podStartE2EDuration="41.082610593s" podCreationTimestamp="2026-01-29 
16:26:21 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.325962414 +0000 UTC m=+867.128939678" lastFinishedPulling="2026-01-29 16:26:56.44532619 +0000 UTC m=+900.248303484" observedRunningTime="2026-01-29 16:27:02.082003057 +0000 UTC m=+905.884980341" watchObservedRunningTime="2026-01-29 16:27:02.082610593 +0000 UTC m=+905.885587857" Jan 29 16:27:02 crc kubenswrapper[4895]: I0129 16:27:02.992804 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk" event={"ID":"826ca63d-ce7b-4d52-9fb5-31bdbb523416","Type":"ContainerStarted","Data":"8347749fea1e265465624d3d539057e6cc258b4483a58653421221b331720e3f"} Jan 29 16:27:02 crc kubenswrapper[4895]: I0129 16:27:02.994703 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk" Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.020305 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" event={"ID":"7c8841b5-eefc-4ce3-bb5b-111a252e4316","Type":"ContainerStarted","Data":"619252713bbf105fae5657bf28ebda2795308caac519538eef791eeaef544391"} Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.021521 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.072219 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk" podStartSLOduration=7.688600958 podStartE2EDuration="41.072189226s" podCreationTimestamp="2026-01-29 16:26:22 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.987188933 +0000 UTC m=+867.790166197" lastFinishedPulling="2026-01-29 16:26:57.370777201 +0000 UTC m=+901.173754465" observedRunningTime="2026-01-29 
16:27:03.071487688 +0000 UTC m=+906.874464962" watchObservedRunningTime="2026-01-29 16:27:03.072189226 +0000 UTC m=+906.875166490" Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.081566 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq" Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.081618 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr" Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.081635 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq" event={"ID":"15381bb6-d539-48b4-976c-5b2a27fa7aaa","Type":"ContainerStarted","Data":"4b60c16378534a7130dc8be6691639f1e77b91a35a8d22f74d17e18a5c548065"} Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.081662 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr" event={"ID":"9730658e-c8ca-4448-a3c0-68116c92840f","Type":"ContainerStarted","Data":"09b068f97cad1e3eec6abeac4469a619d7a5bb6efaa281ac8c360a500220fc4c"} Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.081676 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c" event={"ID":"3c7eb208-8a46-49ed-8efd-1b0fceabd3c8","Type":"ContainerStarted","Data":"3aa4d637f924b24f47bf2b04a5b71797c9ee532f1a2a908c62ccc0de32eee797"} Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.082630 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c" Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.098194 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4" 
event={"ID":"7b0b2050-5cdb-44e0-a858-bf6aa331d2c6","Type":"ContainerStarted","Data":"2d6befbda3bf56eb42626144cdbfbe432e23b6b02f94657d252a033fb9f497b0"} Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.139344 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8" event={"ID":"d3b8a7bf-6741-4bfe-8835-e942e688098d","Type":"ContainerStarted","Data":"239b82ea359638ce4bcbb16d400ada4b009e933ed6d1c2d10db2f214763202f3"} Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.140533 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8" Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.162960 4895 generic.go:334] "Generic (PLEG): container finished" podID="88503b6f-3ee9-4640-80a3-c1543f577b7f" containerID="208d929df4730864516520734a9be36458962aa0c30be2010216a54816b0e2b4" exitCode=0 Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.163049 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9bm8" event={"ID":"88503b6f-3ee9-4640-80a3-c1543f577b7f","Type":"ContainerDied","Data":"208d929df4730864516520734a9be36458962aa0c30be2010216a54816b0e2b4"} Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.178032 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv" event={"ID":"1e62c1e2-2a44-4985-a787-ad3cfaa3ba5d","Type":"ContainerStarted","Data":"b623002d9c2b710178edb9a826c1899b258cde38494319d8f9c4e1c2577a2f9a"} Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.178120 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv" Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.197965 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="5debaad0-76f7-41af-b13b-13844bd3b73b" containerID="d5eac9e91522c901cad128df442c9d796b70af3812876059321fb5c721300c75" exitCode=0
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.198108 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p676" event={"ID":"5debaad0-76f7-41af-b13b-13844bd3b73b","Type":"ContainerDied","Data":"d5eac9e91522c901cad128df442c9d796b70af3812876059321fb5c721300c75"}
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.199317 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg" podStartSLOduration=41.199287429 podStartE2EDuration="41.199287429s" podCreationTimestamp="2026-01-29 16:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:27:03.19628973 +0000 UTC m=+906.999267004" watchObservedRunningTime="2026-01-29 16:27:03.199287429 +0000 UTC m=+907.002264693"
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.201942 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w" event={"ID":"91b335a1-04fb-48e6-bf93-8bce6c4da648","Type":"ContainerStarted","Data":"003e0433cd750b8a0d05bffdad18073881111028ddbd49d5992a378515135904"}
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.211440 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w"
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.213604 4895 generic.go:334] "Generic (PLEG): container finished" podID="507c5a6c-56bc-4fed-87c1-660e5786c979" containerID="c854da4239a56f77065d867426479a898e79f6a5ab7e4ca1fc0df3c08ca22e28" exitCode=0
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.213728 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27pfv" event={"ID":"507c5a6c-56bc-4fed-87c1-660e5786c979","Type":"ContainerDied","Data":"c854da4239a56f77065d867426479a898e79f6a5ab7e4ca1fc0df3c08ca22e28"}
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.247371 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x" event={"ID":"53136333-31ce-4a3c-9477-0dde82bc7ec0","Type":"ContainerStarted","Data":"c518c6438a919d23f31f16d36a6d91f84bda1d4ed6e091d628e5195f7242abd5"}
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.247845 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x"
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.354908 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq" podStartSLOduration=7.970753426 podStartE2EDuration="41.354888349s" podCreationTimestamp="2026-01-29 16:26:22 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.987380038 +0000 UTC m=+867.790357302" lastFinishedPulling="2026-01-29 16:26:57.371514961 +0000 UTC m=+901.174492225" observedRunningTime="2026-01-29 16:27:03.346978969 +0000 UTC m=+907.149956243" watchObservedRunningTime="2026-01-29 16:27:03.354888349 +0000 UTC m=+907.157865623"
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.422360 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr" podStartSLOduration=8.0360808 podStartE2EDuration="41.422337979s" podCreationTimestamp="2026-01-29 16:26:22 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.986920676 +0000 UTC m=+867.789897930" lastFinishedPulling="2026-01-29 16:26:57.373177845 +0000 UTC m=+901.176155109" observedRunningTime="2026-01-29 16:27:03.419369241 +0000 UTC m=+907.222346505" watchObservedRunningTime="2026-01-29 16:27:03.422337979 +0000 UTC m=+907.225315243"
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.576832 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8" podStartSLOduration=4.6433699090000005 podStartE2EDuration="41.576802219s" podCreationTimestamp="2026-01-29 16:26:22 +0000 UTC" firstStartedPulling="2026-01-29 16:26:24.12760818 +0000 UTC m=+867.930585444" lastFinishedPulling="2026-01-29 16:27:01.06104048 +0000 UTC m=+904.864017754" observedRunningTime="2026-01-29 16:27:03.574850187 +0000 UTC m=+907.377827461" watchObservedRunningTime="2026-01-29 16:27:03.576802219 +0000 UTC m=+907.379779483"
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.635106 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c" podStartSLOduration=4.67134283 podStartE2EDuration="41.635079256s" podCreationTimestamp="2026-01-29 16:26:22 +0000 UTC" firstStartedPulling="2026-01-29 16:26:24.091386458 +0000 UTC m=+867.894363722" lastFinishedPulling="2026-01-29 16:27:01.055122884 +0000 UTC m=+904.858100148" observedRunningTime="2026-01-29 16:27:03.631987794 +0000 UTC m=+907.434965068" watchObservedRunningTime="2026-01-29 16:27:03.635079256 +0000 UTC m=+907.438056520"
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.664822 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv" podStartSLOduration=10.18563125 podStartE2EDuration="42.664799405s" podCreationTimestamp="2026-01-29 16:26:21 +0000 UTC" firstStartedPulling="2026-01-29 16:26:22.987246235 +0000 UTC m=+866.790223499" lastFinishedPulling="2026-01-29 16:26:55.46641439 +0000 UTC m=+899.269391654" observedRunningTime="2026-01-29 16:27:03.656735781 +0000 UTC m=+907.459713045" watchObservedRunningTime="2026-01-29 16:27:03.664799405 +0000 UTC m=+907.467776669"
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.686974 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w" podStartSLOduration=4.572904849 podStartE2EDuration="41.686949063s" podCreationTimestamp="2026-01-29 16:26:22 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.993703196 +0000 UTC m=+867.796680460" lastFinishedPulling="2026-01-29 16:27:01.10774741 +0000 UTC m=+904.910724674" observedRunningTime="2026-01-29 16:27:03.683153361 +0000 UTC m=+907.486130655" watchObservedRunningTime="2026-01-29 16:27:03.686949063 +0000 UTC m=+907.489926327"
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.775603 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dbmk4" podStartSLOduration=4.713086438 podStartE2EDuration="41.775581394s" podCreationTimestamp="2026-01-29 16:26:22 +0000 UTC" firstStartedPulling="2026-01-29 16:26:24.170970671 +0000 UTC m=+867.973947935" lastFinishedPulling="2026-01-29 16:27:01.233465627 +0000 UTC m=+905.036442891" observedRunningTime="2026-01-29 16:27:03.771210839 +0000 UTC m=+907.574188123" watchObservedRunningTime="2026-01-29 16:27:03.775581394 +0000 UTC m=+907.578558658"
Jan 29 16:27:03 crc kubenswrapper[4895]: I0129 16:27:03.777047 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x" podStartSLOduration=5.739530912 podStartE2EDuration="42.777039144s" podCreationTimestamp="2026-01-29 16:26:21 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.997632491 +0000 UTC m=+867.800609755" lastFinishedPulling="2026-01-29 16:27:01.035140713 +0000 UTC m=+904.838117987" observedRunningTime="2026-01-29 16:27:03.745363902 +0000 UTC m=+907.548341176" watchObservedRunningTime="2026-01-29 16:27:03.777039144 +0000 UTC m=+907.580016408"
Jan 29 16:27:04 crc kubenswrapper[4895]: I0129 16:27:04.265407 4895 generic.go:334] "Generic (PLEG): container finished" podID="e67dca96-61b0-4b00-a504-d06571c94346" containerID="f0e495d3c263605d26f0867722ee0d376f117365de1f007063d97336e553a70b" exitCode=0
Jan 29 16:27:04 crc kubenswrapper[4895]: I0129 16:27:04.265595 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgtj6" event={"ID":"e67dca96-61b0-4b00-a504-d06571c94346","Type":"ContainerDied","Data":"f0e495d3c263605d26f0867722ee0d376f117365de1f007063d97336e553a70b"}
Jan 29 16:27:04 crc kubenswrapper[4895]: I0129 16:27:04.269858 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9bm8" event={"ID":"88503b6f-3ee9-4640-80a3-c1543f577b7f","Type":"ContainerStarted","Data":"664cc31f419279811b2553799c97f6735d30e222dbb7841f0c33d130e1c9b06d"}
Jan 29 16:27:05 crc kubenswrapper[4895]: I0129 16:27:05.286042 4895 generic.go:334] "Generic (PLEG): container finished" podID="5debaad0-76f7-41af-b13b-13844bd3b73b" containerID="e42e3f5c47fed4fe4542250ccf83b67c74addecf11b2ab36dd35a7a7d59808fc" exitCode=0
Jan 29 16:27:05 crc kubenswrapper[4895]: I0129 16:27:05.286225 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p676" event={"ID":"5debaad0-76f7-41af-b13b-13844bd3b73b","Type":"ContainerDied","Data":"e42e3f5c47fed4fe4542250ccf83b67c74addecf11b2ab36dd35a7a7d59808fc"}
Jan 29 16:27:05 crc kubenswrapper[4895]: I0129 16:27:05.293169 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27pfv" event={"ID":"507c5a6c-56bc-4fed-87c1-660e5786c979","Type":"ContainerStarted","Data":"cdebb37ff51499d66a903d406378593e41172d97032f7338f13b38ad23d1133c"}
Jan 29 16:27:05 crc kubenswrapper[4895]: I0129 16:27:05.296968 4895 generic.go:334] "Generic (PLEG): container finished" podID="88503b6f-3ee9-4640-80a3-c1543f577b7f" containerID="664cc31f419279811b2553799c97f6735d30e222dbb7841f0c33d130e1c9b06d" exitCode=0
Jan 29 16:27:05 crc kubenswrapper[4895]: I0129 16:27:05.297225 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9bm8" event={"ID":"88503b6f-3ee9-4640-80a3-c1543f577b7f","Type":"ContainerDied","Data":"664cc31f419279811b2553799c97f6735d30e222dbb7841f0c33d130e1c9b06d"}
Jan 29 16:27:06 crc kubenswrapper[4895]: I0129 16:27:06.310136 4895 generic.go:334] "Generic (PLEG): container finished" podID="507c5a6c-56bc-4fed-87c1-660e5786c979" containerID="cdebb37ff51499d66a903d406378593e41172d97032f7338f13b38ad23d1133c" exitCode=0
Jan 29 16:27:06 crc kubenswrapper[4895]: I0129 16:27:06.310225 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27pfv" event={"ID":"507c5a6c-56bc-4fed-87c1-660e5786c979","Type":"ContainerDied","Data":"cdebb37ff51499d66a903d406378593e41172d97032f7338f13b38ad23d1133c"}
Jan 29 16:27:08 crc kubenswrapper[4895]: E0129 16:27:08.038396 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" podUID="dbd2491c-2587-47d3-8201-26b8e68bfcb7"
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.349344 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27pfv" event={"ID":"507c5a6c-56bc-4fed-87c1-660e5786c979","Type":"ContainerStarted","Data":"317b10e4fb6aa1ef9eaa93f9cbccbde25b7be41450ef5739541723b385d69618"}
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.351265 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" event={"ID":"5a7c85c3-b835-48f5-99ca-2c2949ab85bf","Type":"ContainerStarted","Data":"2adba2799eef68471afbc7d2a1ac40fb80c7cba29aa2462ccbb4b39a7a73cc36"}
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.351351 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv"
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.355350 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-96st6" event={"ID":"0458d64d-6cee-41f7-bb2d-17fe71893b95","Type":"ContainerStarted","Data":"3cc9b2702daac7fbd6b69d20fac995e1b0d6121488cc2f5d7ca0a920c8161158"}
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.355983 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-96st6"
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.357424 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgtj6" event={"ID":"e67dca96-61b0-4b00-a504-d06571c94346","Type":"ContainerStarted","Data":"616268ab8ff9a68a56f9200542fe39760d520f7675a00c9fbd64b4e8dc6ef53f"}
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.361998 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9bm8" event={"ID":"88503b6f-3ee9-4640-80a3-c1543f577b7f","Type":"ContainerStarted","Data":"137ae2e0dd2550974ecd4f2a2cb7a3da84c765c4925c65e37b6ca1388f965d4b"}
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.364243 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p676" event={"ID":"5debaad0-76f7-41af-b13b-13844bd3b73b","Type":"ContainerStarted","Data":"0e73fc812a988d77cc34dd07fb14cef084e0b7663aaf2f0be65f8b7be6693e2a"}
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.367341 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" event={"ID":"861c2b0f-fa28-408a-b270-a7e1f9ee57e2","Type":"ContainerStarted","Data":"204f09de07ad5eba0f7dadeafae2eaf78dde15d94d6f688d86adde68186d5cef"}
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.367599 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh"
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.374481 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-27pfv" podStartSLOduration=27.690397363 podStartE2EDuration="32.374456552s" podCreationTimestamp="2026-01-29 16:26:36 +0000 UTC" firstStartedPulling="2026-01-29 16:27:03.224134279 +0000 UTC m=+907.027111543" lastFinishedPulling="2026-01-29 16:27:07.908193468 +0000 UTC m=+911.711170732" observedRunningTime="2026-01-29 16:27:08.368175862 +0000 UTC m=+912.171153126" watchObservedRunningTime="2026-01-29 16:27:08.374456552 +0000 UTC m=+912.177433816"
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.396528 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv" podStartSLOduration=41.971407565 podStartE2EDuration="47.396505559s" podCreationTimestamp="2026-01-29 16:26:21 +0000 UTC" firstStartedPulling="2026-01-29 16:27:01.686934392 +0000 UTC m=+905.489911656" lastFinishedPulling="2026-01-29 16:27:07.112032386 +0000 UTC m=+910.915009650" observedRunningTime="2026-01-29 16:27:08.394402002 +0000 UTC m=+912.197379276" watchObservedRunningTime="2026-01-29 16:27:08.396505559 +0000 UTC m=+912.199482833"
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.469718 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh" podStartSLOduration=41.055772611 podStartE2EDuration="46.469696772s" podCreationTimestamp="2026-01-29 16:26:22 +0000 UTC" firstStartedPulling="2026-01-29 16:27:01.686409018 +0000 UTC m=+905.489386282" lastFinishedPulling="2026-01-29 16:27:07.100333139 +0000 UTC m=+910.903310443" observedRunningTime="2026-01-29 16:27:08.443146482 +0000 UTC m=+912.246123746" watchObservedRunningTime="2026-01-29 16:27:08.469696772 +0000 UTC m=+912.272674036"
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.471364 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xgtj6" podStartSLOduration=34.259348745 podStartE2EDuration="38.471357417s" podCreationTimestamp="2026-01-29 16:26:30 +0000 UTC" firstStartedPulling="2026-01-29 16:27:01.913238938 +0000 UTC m=+905.716216202" lastFinishedPulling="2026-01-29 16:27:06.12524757 +0000 UTC m=+909.928224874" observedRunningTime="2026-01-29 16:27:08.468790247 +0000 UTC m=+912.271767511" watchObservedRunningTime="2026-01-29 16:27:08.471357417 +0000 UTC m=+912.274334681"
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.490793 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-96st6" podStartSLOduration=4.312726604 podStartE2EDuration="47.490773223s" podCreationTimestamp="2026-01-29 16:26:21 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.977509657 +0000 UTC m=+867.780486921" lastFinishedPulling="2026-01-29 16:27:07.155556276 +0000 UTC m=+910.958533540" observedRunningTime="2026-01-29 16:27:08.488296416 +0000 UTC m=+912.291273690" watchObservedRunningTime="2026-01-29 16:27:08.490773223 +0000 UTC m=+912.293750487"
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.520294 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g9bm8" podStartSLOduration=17.552070863 podStartE2EDuration="21.520269023s" podCreationTimestamp="2026-01-29 16:26:47 +0000 UTC" firstStartedPulling="2026-01-29 16:27:03.166018427 +0000 UTC m=+906.968995691" lastFinishedPulling="2026-01-29 16:27:07.134216547 +0000 UTC m=+910.937193851" observedRunningTime="2026-01-29 16:27:08.512314727 +0000 UTC m=+912.315292001" watchObservedRunningTime="2026-01-29 16:27:08.520269023 +0000 UTC m=+912.323246287"
Jan 29 16:27:08 crc kubenswrapper[4895]: I0129 16:27:08.534926 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6p676" podStartSLOduration=12.567369691 podStartE2EDuration="16.534903359s" podCreationTimestamp="2026-01-29 16:26:52 +0000 UTC" firstStartedPulling="2026-01-29 16:27:03.210449986 +0000 UTC m=+907.013427250" lastFinishedPulling="2026-01-29 16:27:07.177983654 +0000 UTC m=+910.980960918" observedRunningTime="2026-01-29 16:27:08.529500312 +0000 UTC m=+912.332477576" watchObservedRunningTime="2026-01-29 16:27:08.534903359 +0000 UTC m=+912.337880623"
Jan 29 16:27:10 crc kubenswrapper[4895]: I0129 16:27:10.656551 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:27:10 crc kubenswrapper[4895]: I0129 16:27:10.658297 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:27:10 crc kubenswrapper[4895]: I0129 16:27:10.714591 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:27:12 crc kubenswrapper[4895]: I0129 16:27:12.142087 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-bf99b56bc-929qv"
Jan 29 16:27:12 crc kubenswrapper[4895]: I0129 16:27:12.169664 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-6hjs8"
Jan 29 16:27:12 crc kubenswrapper[4895]: I0129 16:27:12.187935 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-l6jpv"
Jan 29 16:27:12 crc kubenswrapper[4895]: I0129 16:27:12.215226 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-flr48"
Jan 29 16:27:12 crc kubenswrapper[4895]: I0129 16:27:12.335033 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-vwdtk"
Jan 29 16:27:12 crc kubenswrapper[4895]: I0129 16:27:12.430722 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-46kvb"
Jan 29 16:27:12 crc kubenswrapper[4895]: I0129 16:27:12.579846 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qwjw9"
Jan 29 16:27:12 crc kubenswrapper[4895]: I0129 16:27:12.599578 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-96st6"
Jan 29 16:27:12 crc kubenswrapper[4895]: I0129 16:27:12.626858 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-l6b5x"
Jan 29 16:27:12 crc kubenswrapper[4895]: I0129 16:27:12.664023 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4v7lq"
Jan 29 16:27:12 crc kubenswrapper[4895]: I0129 16:27:12.839116 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jndgk"
Jan 29 16:27:12 crc kubenswrapper[4895]: I0129 16:27:12.899948 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2xzrq"
Jan 29 16:27:13 crc kubenswrapper[4895]: I0129 16:27:13.086014 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-5z28c"
Jan 29 16:27:13 crc kubenswrapper[4895]: I0129 16:27:13.096764 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-8vvcr"
Jan 29 16:27:13 crc kubenswrapper[4895]: I0129 16:27:13.149695 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6p676"
Jan 29 16:27:13 crc kubenswrapper[4895]: I0129 16:27:13.155632 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6p676"
Jan 29 16:27:13 crc kubenswrapper[4895]: I0129 16:27:13.162988 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-vcgf8"
Jan 29 16:27:13 crc kubenswrapper[4895]: I0129 16:27:13.163124 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-f7b6w"
Jan 29 16:27:13 crc kubenswrapper[4895]: I0129 16:27:13.231932 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6p676"
Jan 29 16:27:13 crc kubenswrapper[4895]: I0129 16:27:13.456735 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6p676"
Jan 29 16:27:14 crc kubenswrapper[4895]: I0129 16:27:14.247217 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-pd4hv"
Jan 29 16:27:14 crc kubenswrapper[4895]: I0129 16:27:14.690506 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh"
Jan 29 16:27:14 crc kubenswrapper[4895]: I0129 16:27:14.995250 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6dbd47d457-rqtbg"
Jan 29 16:27:15 crc kubenswrapper[4895]: I0129 16:27:15.485063 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p676"]
Jan 29 16:27:16 crc kubenswrapper[4895]: I0129 16:27:16.439761 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6p676" podUID="5debaad0-76f7-41af-b13b-13844bd3b73b" containerName="registry-server" containerID="cri-o://0e73fc812a988d77cc34dd07fb14cef084e0b7663aaf2f0be65f8b7be6693e2a" gracePeriod=2
Jan 29 16:27:16 crc kubenswrapper[4895]: I0129 16:27:16.482077 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-27pfv"
Jan 29 16:27:16 crc kubenswrapper[4895]: I0129 16:27:16.482198 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-27pfv"
Jan 29 16:27:16 crc kubenswrapper[4895]: I0129 16:27:16.541007 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-27pfv"
Jan 29 16:27:17 crc kubenswrapper[4895]: I0129 16:27:17.518722 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-27pfv"
Jan 29 16:27:17 crc kubenswrapper[4895]: I0129 16:27:17.534039 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g9bm8"
Jan 29 16:27:17 crc kubenswrapper[4895]: I0129 16:27:17.534126 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g9bm8"
Jan 29 16:27:17 crc kubenswrapper[4895]: I0129 16:27:17.607262 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g9bm8"
Jan 29 16:27:18 crc kubenswrapper[4895]: I0129 16:27:18.535725 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g9bm8"
Jan 29 16:27:18 crc kubenswrapper[4895]: I0129 16:27:18.887535 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-27pfv"]
Jan 29 16:27:19 crc kubenswrapper[4895]: I0129 16:27:19.479027 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-27pfv" podUID="507c5a6c-56bc-4fed-87c1-660e5786c979" containerName="registry-server" containerID="cri-o://317b10e4fb6aa1ef9eaa93f9cbccbde25b7be41450ef5739541723b385d69618" gracePeriod=2
Jan 29 16:27:20 crc kubenswrapper[4895]: I0129 16:27:20.732124 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:27:21 crc kubenswrapper[4895]: I0129 16:27:21.284151 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g9bm8"]
Jan 29 16:27:21 crc kubenswrapper[4895]: I0129 16:27:21.495326 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g9bm8" podUID="88503b6f-3ee9-4640-80a3-c1543f577b7f" containerName="registry-server" containerID="cri-o://137ae2e0dd2550974ecd4f2a2cb7a3da84c765c4925c65e37b6ca1388f965d4b" gracePeriod=2
Jan 29 16:27:22 crc kubenswrapper[4895]: I0129 16:27:22.507042 4895 generic.go:334] "Generic (PLEG): container finished" podID="5debaad0-76f7-41af-b13b-13844bd3b73b" containerID="0e73fc812a988d77cc34dd07fb14cef084e0b7663aaf2f0be65f8b7be6693e2a" exitCode=0
Jan 29 16:27:22 crc kubenswrapper[4895]: I0129 16:27:22.507098 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p676" event={"ID":"5debaad0-76f7-41af-b13b-13844bd3b73b","Type":"ContainerDied","Data":"0e73fc812a988d77cc34dd07fb14cef084e0b7663aaf2f0be65f8b7be6693e2a"}
Jan 29 16:27:23 crc kubenswrapper[4895]: E0129 16:27:23.135622 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0e73fc812a988d77cc34dd07fb14cef084e0b7663aaf2f0be65f8b7be6693e2a is running failed: container process not found" containerID="0e73fc812a988d77cc34dd07fb14cef084e0b7663aaf2f0be65f8b7be6693e2a" cmd=["grpc_health_probe","-addr=:50051"]
Jan 29 16:27:23 crc kubenswrapper[4895]: E0129 16:27:23.136424 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0e73fc812a988d77cc34dd07fb14cef084e0b7663aaf2f0be65f8b7be6693e2a is running failed: container process not found" containerID="0e73fc812a988d77cc34dd07fb14cef084e0b7663aaf2f0be65f8b7be6693e2a" cmd=["grpc_health_probe","-addr=:50051"]
Jan 29 16:27:23 crc kubenswrapper[4895]: E0129 16:27:23.136910 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0e73fc812a988d77cc34dd07fb14cef084e0b7663aaf2f0be65f8b7be6693e2a is running failed: container process not found" containerID="0e73fc812a988d77cc34dd07fb14cef084e0b7663aaf2f0be65f8b7be6693e2a" cmd=["grpc_health_probe","-addr=:50051"]
Jan 29 16:27:23 crc kubenswrapper[4895]: E0129 16:27:23.137012 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0e73fc812a988d77cc34dd07fb14cef084e0b7663aaf2f0be65f8b7be6693e2a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-6p676" podUID="5debaad0-76f7-41af-b13b-13844bd3b73b" containerName="registry-server"
Jan 29 16:27:23 crc kubenswrapper[4895]: I0129 16:27:23.535547 4895 generic.go:334] "Generic (PLEG): container finished" podID="507c5a6c-56bc-4fed-87c1-660e5786c979" containerID="317b10e4fb6aa1ef9eaa93f9cbccbde25b7be41450ef5739541723b385d69618" exitCode=0
Jan 29 16:27:23 crc kubenswrapper[4895]: I0129 16:27:23.535679 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27pfv" event={"ID":"507c5a6c-56bc-4fed-87c1-660e5786c979","Type":"ContainerDied","Data":"317b10e4fb6aa1ef9eaa93f9cbccbde25b7be41450ef5739541723b385d69618"}
Jan 29 16:27:23 crc kubenswrapper[4895]: I0129 16:27:23.679512 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xgtj6"]
Jan 29 16:27:23 crc kubenswrapper[4895]: I0129 16:27:23.679781 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xgtj6" podUID="e67dca96-61b0-4b00-a504-d06571c94346" containerName="registry-server" containerID="cri-o://616268ab8ff9a68a56f9200542fe39760d520f7675a00c9fbd64b4e8dc6ef53f" gracePeriod=2
Jan 29 16:27:24 crc kubenswrapper[4895]: I0129 16:27:24.551601 4895 generic.go:334] "Generic (PLEG): container finished" podID="88503b6f-3ee9-4640-80a3-c1543f577b7f" containerID="137ae2e0dd2550974ecd4f2a2cb7a3da84c765c4925c65e37b6ca1388f965d4b" exitCode=0
Jan 29 16:27:24 crc kubenswrapper[4895]: I0129 16:27:24.551696 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9bm8" event={"ID":"88503b6f-3ee9-4640-80a3-c1543f577b7f","Type":"ContainerDied","Data":"137ae2e0dd2550974ecd4f2a2cb7a3da84c765c4925c65e37b6ca1388f965d4b"}
Jan 29 16:27:25 crc kubenswrapper[4895]: I0129 16:27:25.565778 4895 generic.go:334] "Generic (PLEG): container finished" podID="e67dca96-61b0-4b00-a504-d06571c94346" containerID="616268ab8ff9a68a56f9200542fe39760d520f7675a00c9fbd64b4e8dc6ef53f" exitCode=0
Jan 29 16:27:25 crc kubenswrapper[4895]: I0129 16:27:25.565845 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgtj6" event={"ID":"e67dca96-61b0-4b00-a504-d06571c94346","Type":"ContainerDied","Data":"616268ab8ff9a68a56f9200542fe39760d520f7675a00c9fbd64b4e8dc6ef53f"}
Jan 29 16:27:26 crc kubenswrapper[4895]: E0129 16:27:26.483174 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 317b10e4fb6aa1ef9eaa93f9cbccbde25b7be41450ef5739541723b385d69618 is running failed: container process not found" containerID="317b10e4fb6aa1ef9eaa93f9cbccbde25b7be41450ef5739541723b385d69618" cmd=["grpc_health_probe","-addr=:50051"]
Jan 29 16:27:26 crc kubenswrapper[4895]: E0129 16:27:26.483898 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 317b10e4fb6aa1ef9eaa93f9cbccbde25b7be41450ef5739541723b385d69618 is running failed: container process not found" containerID="317b10e4fb6aa1ef9eaa93f9cbccbde25b7be41450ef5739541723b385d69618" cmd=["grpc_health_probe","-addr=:50051"]
Jan 29 16:27:26 crc kubenswrapper[4895]: E0129 16:27:26.484141 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 317b10e4fb6aa1ef9eaa93f9cbccbde25b7be41450ef5739541723b385d69618 is running failed: container process not found" containerID="317b10e4fb6aa1ef9eaa93f9cbccbde25b7be41450ef5739541723b385d69618" cmd=["grpc_health_probe","-addr=:50051"]
Jan 29 16:27:26 crc kubenswrapper[4895]: E0129 16:27:26.484169 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 317b10e4fb6aa1ef9eaa93f9cbccbde25b7be41450ef5739541723b385d69618 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-27pfv" podUID="507c5a6c-56bc-4fed-87c1-660e5786c979" containerName="registry-server"
Jan 29 16:27:27 crc kubenswrapper[4895]: E0129 16:27:27.533671 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 137ae2e0dd2550974ecd4f2a2cb7a3da84c765c4925c65e37b6ca1388f965d4b is running failed: container process not found" containerID="137ae2e0dd2550974ecd4f2a2cb7a3da84c765c4925c65e37b6ca1388f965d4b" cmd=["grpc_health_probe","-addr=:50051"]
Jan 29 16:27:27 crc kubenswrapper[4895]: E0129 16:27:27.535212 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 137ae2e0dd2550974ecd4f2a2cb7a3da84c765c4925c65e37b6ca1388f965d4b is running failed: container process not found" containerID="137ae2e0dd2550974ecd4f2a2cb7a3da84c765c4925c65e37b6ca1388f965d4b" cmd=["grpc_health_probe","-addr=:50051"]
Jan 29 16:27:27 crc kubenswrapper[4895]: E0129 16:27:27.535831 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 137ae2e0dd2550974ecd4f2a2cb7a3da84c765c4925c65e37b6ca1388f965d4b is running failed: container process not found" containerID="137ae2e0dd2550974ecd4f2a2cb7a3da84c765c4925c65e37b6ca1388f965d4b" cmd=["grpc_health_probe","-addr=:50051"]
Jan 29 16:27:27 crc kubenswrapper[4895]: E0129 16:27:27.535897 4895 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 137ae2e0dd2550974ecd4f2a2cb7a3da84c765c4925c65e37b6ca1388f965d4b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-g9bm8" podUID="88503b6f-3ee9-4640-80a3-c1543f577b7f" containerName="registry-server"
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.103426 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6p676"
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.190921 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9czw\" (UniqueName: \"kubernetes.io/projected/5debaad0-76f7-41af-b13b-13844bd3b73b-kube-api-access-v9czw\") pod \"5debaad0-76f7-41af-b13b-13844bd3b73b\" (UID: \"5debaad0-76f7-41af-b13b-13844bd3b73b\") "
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.191407 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5debaad0-76f7-41af-b13b-13844bd3b73b-utilities\") pod \"5debaad0-76f7-41af-b13b-13844bd3b73b\" (UID: \"5debaad0-76f7-41af-b13b-13844bd3b73b\") "
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.191658 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5debaad0-76f7-41af-b13b-13844bd3b73b-catalog-content\") pod \"5debaad0-76f7-41af-b13b-13844bd3b73b\" (UID: \"5debaad0-76f7-41af-b13b-13844bd3b73b\") "
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.198216 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5debaad0-76f7-41af-b13b-13844bd3b73b-utilities" (OuterVolumeSpecName: "utilities") pod "5debaad0-76f7-41af-b13b-13844bd3b73b" (UID: "5debaad0-76f7-41af-b13b-13844bd3b73b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.201443 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5debaad0-76f7-41af-b13b-13844bd3b73b-kube-api-access-v9czw" (OuterVolumeSpecName: "kube-api-access-v9czw") pod "5debaad0-76f7-41af-b13b-13844bd3b73b" (UID: "5debaad0-76f7-41af-b13b-13844bd3b73b"). InnerVolumeSpecName "kube-api-access-v9czw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.219089 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5debaad0-76f7-41af-b13b-13844bd3b73b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5debaad0-76f7-41af-b13b-13844bd3b73b" (UID: "5debaad0-76f7-41af-b13b-13844bd3b73b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.293609 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5debaad0-76f7-41af-b13b-13844bd3b73b-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.293654 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5debaad0-76f7-41af-b13b-13844bd3b73b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.293672 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9czw\" (UniqueName: \"kubernetes.io/projected/5debaad0-76f7-41af-b13b-13844bd3b73b-kube-api-access-v9czw\") on node \"crc\" DevicePath \"\""
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.398825 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgtj6"
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.458783 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9bm8"
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.472274 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-27pfv"
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.541125 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67dca96-61b0-4b00-a504-d06571c94346-catalog-content\") pod \"e67dca96-61b0-4b00-a504-d06571c94346\" (UID: \"e67dca96-61b0-4b00-a504-d06571c94346\") "
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.541263 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67dca96-61b0-4b00-a504-d06571c94346-utilities\") pod \"e67dca96-61b0-4b00-a504-d06571c94346\" (UID: \"e67dca96-61b0-4b00-a504-d06571c94346\") "
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.541375 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jsm6\" (UniqueName: \"kubernetes.io/projected/e67dca96-61b0-4b00-a504-d06571c94346-kube-api-access-4jsm6\") pod \"e67dca96-61b0-4b00-a504-d06571c94346\" (UID: \"e67dca96-61b0-4b00-a504-d06571c94346\") "
Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.544550 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e67dca96-61b0-4b00-a504-d06571c94346-utilities" (OuterVolumeSpecName: "utilities") pod "e67dca96-61b0-4b00-a504-d06571c94346" (UID: "e67dca96-61b0-4b00-a504-d06571c94346"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.551164 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67dca96-61b0-4b00-a504-d06571c94346-kube-api-access-4jsm6" (OuterVolumeSpecName: "kube-api-access-4jsm6") pod "e67dca96-61b0-4b00-a504-d06571c94346" (UID: "e67dca96-61b0-4b00-a504-d06571c94346"). InnerVolumeSpecName "kube-api-access-4jsm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.591084 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e67dca96-61b0-4b00-a504-d06571c94346-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e67dca96-61b0-4b00-a504-d06571c94346" (UID: "e67dca96-61b0-4b00-a504-d06571c94346"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.604427 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6p676" event={"ID":"5debaad0-76f7-41af-b13b-13844bd3b73b","Type":"ContainerDied","Data":"ab7062373d11a3f1af5dd5fac095f5eeab637e36ab568b71610d3bdccc2041d6"} Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.604505 4895 scope.go:117] "RemoveContainer" containerID="0e73fc812a988d77cc34dd07fb14cef084e0b7663aaf2f0be65f8b7be6693e2a" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.604683 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6p676" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.610595 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf" event={"ID":"14c9beca-1f3d-42cb-91d2-f7e391a9761a","Type":"ContainerStarted","Data":"0f041e40df0c50c4d8bf8b7b1626828f6c385ec413b30cf1293e3a4f4d232c40"} Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.611354 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.616232 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27pfv" event={"ID":"507c5a6c-56bc-4fed-87c1-660e5786c979","Type":"ContainerDied","Data":"42c5072daacaecdc990b3f947e17664adc5e57be336c081caf06c716b8eb3bdd"} Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.616320 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-27pfv" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.620365 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4" event={"ID":"2fb0cdc6-64b5-432f-a998-26174db87dbb","Type":"ContainerStarted","Data":"284e024a8b760aa26a775c1f00c5e06c0de52c70307cb2d771055653dbd5ae78"} Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.620976 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.621789 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" event={"ID":"dbd2491c-2587-47d3-8201-26b8e68bfcb7","Type":"ContainerStarted","Data":"ea408429a84b5a73ccae00ac4e1fcaf12af645d9849ddf55140017d762af294b"} Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.622542 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.625218 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgtj6" event={"ID":"e67dca96-61b0-4b00-a504-d06571c94346","Type":"ContainerDied","Data":"c71f24932cf8512c4c87fb8726f3c27556900952d60b66f84f42b1d622f90da4"} Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.625418 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xgtj6" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.634358 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9bm8" event={"ID":"88503b6f-3ee9-4640-80a3-c1543f577b7f","Type":"ContainerDied","Data":"86414fb725d5aebefd7a84de97d0b2a13503d236564e397a88a9beb0ab849098"} Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.634541 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9bm8" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.637278 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf" podStartSLOduration=2.547850719 podStartE2EDuration="1m7.637256357s" podCreationTimestamp="2026-01-29 16:26:22 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.977230848 +0000 UTC m=+867.780208102" lastFinishedPulling="2026-01-29 16:27:29.066636476 +0000 UTC m=+932.869613740" observedRunningTime="2026-01-29 16:27:29.63037784 +0000 UTC m=+933.433355104" watchObservedRunningTime="2026-01-29 16:27:29.637256357 +0000 UTC m=+933.440233621" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.639702 4895 scope.go:117] "RemoveContainer" containerID="e42e3f5c47fed4fe4542250ccf83b67c74addecf11b2ab36dd35a7a7d59808fc" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.649944 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88503b6f-3ee9-4640-80a3-c1543f577b7f-catalog-content\") pod \"88503b6f-3ee9-4640-80a3-c1543f577b7f\" (UID: \"88503b6f-3ee9-4640-80a3-c1543f577b7f\") " Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.650521 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnj8d\" (UniqueName: 
\"kubernetes.io/projected/507c5a6c-56bc-4fed-87c1-660e5786c979-kube-api-access-xnj8d\") pod \"507c5a6c-56bc-4fed-87c1-660e5786c979\" (UID: \"507c5a6c-56bc-4fed-87c1-660e5786c979\") " Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.650594 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnr5q\" (UniqueName: \"kubernetes.io/projected/88503b6f-3ee9-4640-80a3-c1543f577b7f-kube-api-access-gnr5q\") pod \"88503b6f-3ee9-4640-80a3-c1543f577b7f\" (UID: \"88503b6f-3ee9-4640-80a3-c1543f577b7f\") " Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.651406 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/507c5a6c-56bc-4fed-87c1-660e5786c979-utilities\") pod \"507c5a6c-56bc-4fed-87c1-660e5786c979\" (UID: \"507c5a6c-56bc-4fed-87c1-660e5786c979\") " Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.651560 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/507c5a6c-56bc-4fed-87c1-660e5786c979-catalog-content\") pod \"507c5a6c-56bc-4fed-87c1-660e5786c979\" (UID: \"507c5a6c-56bc-4fed-87c1-660e5786c979\") " Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.651617 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88503b6f-3ee9-4640-80a3-c1543f577b7f-utilities\") pod \"88503b6f-3ee9-4640-80a3-c1543f577b7f\" (UID: \"88503b6f-3ee9-4640-80a3-c1543f577b7f\") " Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.652737 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jsm6\" (UniqueName: \"kubernetes.io/projected/e67dca96-61b0-4b00-a504-d06571c94346-kube-api-access-4jsm6\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.652767 4895 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67dca96-61b0-4b00-a504-d06571c94346-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.652778 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67dca96-61b0-4b00-a504-d06571c94346-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.656754 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/507c5a6c-56bc-4fed-87c1-660e5786c979-utilities" (OuterVolumeSpecName: "utilities") pod "507c5a6c-56bc-4fed-87c1-660e5786c979" (UID: "507c5a6c-56bc-4fed-87c1-660e5786c979"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.657378 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88503b6f-3ee9-4640-80a3-c1543f577b7f-kube-api-access-gnr5q" (OuterVolumeSpecName: "kube-api-access-gnr5q") pod "88503b6f-3ee9-4640-80a3-c1543f577b7f" (UID: "88503b6f-3ee9-4640-80a3-c1543f577b7f"). InnerVolumeSpecName "kube-api-access-gnr5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.658598 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507c5a6c-56bc-4fed-87c1-660e5786c979-kube-api-access-xnj8d" (OuterVolumeSpecName: "kube-api-access-xnj8d") pod "507c5a6c-56bc-4fed-87c1-660e5786c979" (UID: "507c5a6c-56bc-4fed-87c1-660e5786c979"). InnerVolumeSpecName "kube-api-access-xnj8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.659218 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88503b6f-3ee9-4640-80a3-c1543f577b7f-utilities" (OuterVolumeSpecName: "utilities") pod "88503b6f-3ee9-4640-80a3-c1543f577b7f" (UID: "88503b6f-3ee9-4640-80a3-c1543f577b7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.669446 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p676"] Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.684426 4895 scope.go:117] "RemoveContainer" containerID="d5eac9e91522c901cad128df442c9d796b70af3812876059321fb5c721300c75" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.685197 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6p676"] Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.687004 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4" podStartSLOduration=3.590503217 podStartE2EDuration="1m8.686975644s" podCreationTimestamp="2026-01-29 16:26:21 +0000 UTC" firstStartedPulling="2026-01-29 16:26:23.974937068 +0000 UTC m=+867.777914332" lastFinishedPulling="2026-01-29 16:27:29.071409465 +0000 UTC m=+932.874386759" observedRunningTime="2026-01-29 16:27:29.682140983 +0000 UTC m=+933.485118257" watchObservedRunningTime="2026-01-29 16:27:29.686975644 +0000 UTC m=+933.489952918" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.718345 4895 scope.go:117] "RemoveContainer" containerID="317b10e4fb6aa1ef9eaa93f9cbccbde25b7be41450ef5739541723b385d69618" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.729779 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/88503b6f-3ee9-4640-80a3-c1543f577b7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88503b6f-3ee9-4640-80a3-c1543f577b7f" (UID: "88503b6f-3ee9-4640-80a3-c1543f577b7f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.738539 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" podStartSLOduration=2.53154121 podStartE2EDuration="1m7.73850449s" podCreationTimestamp="2026-01-29 16:26:22 +0000 UTC" firstStartedPulling="2026-01-29 16:26:24.009330601 +0000 UTC m=+867.812307865" lastFinishedPulling="2026-01-29 16:27:29.216293881 +0000 UTC m=+933.019271145" observedRunningTime="2026-01-29 16:27:29.71858669 +0000 UTC m=+933.521563954" watchObservedRunningTime="2026-01-29 16:27:29.73850449 +0000 UTC m=+933.541481744" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.753725 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnj8d\" (UniqueName: \"kubernetes.io/projected/507c5a6c-56bc-4fed-87c1-660e5786c979-kube-api-access-xnj8d\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.753766 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnr5q\" (UniqueName: \"kubernetes.io/projected/88503b6f-3ee9-4640-80a3-c1543f577b7f-kube-api-access-gnr5q\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.753779 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/507c5a6c-56bc-4fed-87c1-660e5786c979-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.753790 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/88503b6f-3ee9-4640-80a3-c1543f577b7f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.753801 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88503b6f-3ee9-4640-80a3-c1543f577b7f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.754780 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xgtj6"] Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.756126 4895 scope.go:117] "RemoveContainer" containerID="cdebb37ff51499d66a903d406378593e41172d97032f7338f13b38ad23d1133c" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.771935 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xgtj6"] Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.786349 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/507c5a6c-56bc-4fed-87c1-660e5786c979-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "507c5a6c-56bc-4fed-87c1-660e5786c979" (UID: "507c5a6c-56bc-4fed-87c1-660e5786c979"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.795277 4895 scope.go:117] "RemoveContainer" containerID="c854da4239a56f77065d867426479a898e79f6a5ab7e4ca1fc0df3c08ca22e28" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.835033 4895 scope.go:117] "RemoveContainer" containerID="616268ab8ff9a68a56f9200542fe39760d520f7675a00c9fbd64b4e8dc6ef53f" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.854610 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/507c5a6c-56bc-4fed-87c1-660e5786c979-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.855338 4895 scope.go:117] "RemoveContainer" containerID="f0e495d3c263605d26f0867722ee0d376f117365de1f007063d97336e553a70b" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.870239 4895 scope.go:117] "RemoveContainer" containerID="518c518aa41d512f79e9457a561fc8275e3db63f5b667caabedf3349c68bef26" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.885761 4895 scope.go:117] "RemoveContainer" containerID="137ae2e0dd2550974ecd4f2a2cb7a3da84c765c4925c65e37b6ca1388f965d4b" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.911476 4895 scope.go:117] "RemoveContainer" containerID="664cc31f419279811b2553799c97f6735d30e222dbb7841f0c33d130e1c9b06d" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.936179 4895 scope.go:117] "RemoveContainer" containerID="208d929df4730864516520734a9be36458962aa0c30be2010216a54816b0e2b4" Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.961354 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-27pfv"] Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.969089 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-27pfv"] Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.975694 4895 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g9bm8"] Jan 29 16:27:29 crc kubenswrapper[4895]: I0129 16:27:29.981698 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g9bm8"] Jan 29 16:27:31 crc kubenswrapper[4895]: I0129 16:27:31.052467 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507c5a6c-56bc-4fed-87c1-660e5786c979" path="/var/lib/kubelet/pods/507c5a6c-56bc-4fed-87c1-660e5786c979/volumes" Jan 29 16:27:31 crc kubenswrapper[4895]: I0129 16:27:31.053374 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5debaad0-76f7-41af-b13b-13844bd3b73b" path="/var/lib/kubelet/pods/5debaad0-76f7-41af-b13b-13844bd3b73b/volumes" Jan 29 16:27:31 crc kubenswrapper[4895]: I0129 16:27:31.053985 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88503b6f-3ee9-4640-80a3-c1543f577b7f" path="/var/lib/kubelet/pods/88503b6f-3ee9-4640-80a3-c1543f577b7f/volumes" Jan 29 16:27:31 crc kubenswrapper[4895]: I0129 16:27:31.055133 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67dca96-61b0-4b00-a504-d06571c94346" path="/var/lib/kubelet/pods/e67dca96-61b0-4b00-a504-d06571c94346/volumes" Jan 29 16:27:42 crc kubenswrapper[4895]: I0129 16:27:42.582178 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-29rf4" Jan 29 16:27:42 crc kubenswrapper[4895]: I0129 16:27:42.761580 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-cmhwf" Jan 29 16:27:42 crc kubenswrapper[4895]: I0129 16:27:42.968670 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-b4wtv" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.008393 4895 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zrm9m"] Jan 29 16:28:00 crc kubenswrapper[4895]: E0129 16:28:00.009261 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507c5a6c-56bc-4fed-87c1-660e5786c979" containerName="extract-content" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009275 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="507c5a6c-56bc-4fed-87c1-660e5786c979" containerName="extract-content" Jan 29 16:28:00 crc kubenswrapper[4895]: E0129 16:28:00.009287 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88503b6f-3ee9-4640-80a3-c1543f577b7f" containerName="extract-utilities" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009293 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="88503b6f-3ee9-4640-80a3-c1543f577b7f" containerName="extract-utilities" Jan 29 16:28:00 crc kubenswrapper[4895]: E0129 16:28:00.009305 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67dca96-61b0-4b00-a504-d06571c94346" containerName="extract-utilities" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009312 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67dca96-61b0-4b00-a504-d06571c94346" containerName="extract-utilities" Jan 29 16:28:00 crc kubenswrapper[4895]: E0129 16:28:00.009324 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88503b6f-3ee9-4640-80a3-c1543f577b7f" containerName="extract-content" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009331 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="88503b6f-3ee9-4640-80a3-c1543f577b7f" containerName="extract-content" Jan 29 16:28:00 crc kubenswrapper[4895]: E0129 16:28:00.009343 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5debaad0-76f7-41af-b13b-13844bd3b73b" containerName="extract-content" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009350 4895 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5debaad0-76f7-41af-b13b-13844bd3b73b" containerName="extract-content" Jan 29 16:28:00 crc kubenswrapper[4895]: E0129 16:28:00.009361 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507c5a6c-56bc-4fed-87c1-660e5786c979" containerName="extract-utilities" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009368 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="507c5a6c-56bc-4fed-87c1-660e5786c979" containerName="extract-utilities" Jan 29 16:28:00 crc kubenswrapper[4895]: E0129 16:28:00.009381 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5debaad0-76f7-41af-b13b-13844bd3b73b" containerName="extract-utilities" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009388 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5debaad0-76f7-41af-b13b-13844bd3b73b" containerName="extract-utilities" Jan 29 16:28:00 crc kubenswrapper[4895]: E0129 16:28:00.009398 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67dca96-61b0-4b00-a504-d06571c94346" containerName="registry-server" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009406 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67dca96-61b0-4b00-a504-d06571c94346" containerName="registry-server" Jan 29 16:28:00 crc kubenswrapper[4895]: E0129 16:28:00.009421 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88503b6f-3ee9-4640-80a3-c1543f577b7f" containerName="registry-server" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009429 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="88503b6f-3ee9-4640-80a3-c1543f577b7f" containerName="registry-server" Jan 29 16:28:00 crc kubenswrapper[4895]: E0129 16:28:00.009443 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5debaad0-76f7-41af-b13b-13844bd3b73b" containerName="registry-server" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009450 4895 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5debaad0-76f7-41af-b13b-13844bd3b73b" containerName="registry-server" Jan 29 16:28:00 crc kubenswrapper[4895]: E0129 16:28:00.009458 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67dca96-61b0-4b00-a504-d06571c94346" containerName="extract-content" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009466 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67dca96-61b0-4b00-a504-d06571c94346" containerName="extract-content" Jan 29 16:28:00 crc kubenswrapper[4895]: E0129 16:28:00.009479 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507c5a6c-56bc-4fed-87c1-660e5786c979" containerName="registry-server" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009489 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="507c5a6c-56bc-4fed-87c1-660e5786c979" containerName="registry-server" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009647 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="507c5a6c-56bc-4fed-87c1-660e5786c979" containerName="registry-server" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009659 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5debaad0-76f7-41af-b13b-13844bd3b73b" containerName="registry-server" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009673 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67dca96-61b0-4b00-a504-d06571c94346" containerName="registry-server" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.009685 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="88503b6f-3ee9-4640-80a3-c1543f577b7f" containerName="registry-server" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.010616 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.015733 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.017225 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.017420 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.018546 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-bntmx" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.031797 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zrm9m"] Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.100506 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g6pkv"] Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.101636 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.110483 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.121268 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g6pkv"] Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.122832 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvh29\" (UniqueName: \"kubernetes.io/projected/05c0aff5-8a3d-4393-9c20-be1c244b760d-kube-api-access-mvh29\") pod \"dnsmasq-dns-675f4bcbfc-zrm9m\" (UID: \"05c0aff5-8a3d-4393-9c20-be1c244b760d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.122907 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c0aff5-8a3d-4393-9c20-be1c244b760d-config\") pod \"dnsmasq-dns-675f4bcbfc-zrm9m\" (UID: \"05c0aff5-8a3d-4393-9c20-be1c244b760d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.224945 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84037826-7329-4416-831c-c6f25133e427-config\") pod \"dnsmasq-dns-78dd6ddcc-g6pkv\" (UID: \"84037826-7329-4416-831c-c6f25133e427\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.225033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvh29\" (UniqueName: \"kubernetes.io/projected/05c0aff5-8a3d-4393-9c20-be1c244b760d-kube-api-access-mvh29\") pod \"dnsmasq-dns-675f4bcbfc-zrm9m\" (UID: \"05c0aff5-8a3d-4393-9c20-be1c244b760d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" Jan 
29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.225170 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c0aff5-8a3d-4393-9c20-be1c244b760d-config\") pod \"dnsmasq-dns-675f4bcbfc-zrm9m\" (UID: \"05c0aff5-8a3d-4393-9c20-be1c244b760d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.225305 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56pvk\" (UniqueName: \"kubernetes.io/projected/84037826-7329-4416-831c-c6f25133e427-kube-api-access-56pvk\") pod \"dnsmasq-dns-78dd6ddcc-g6pkv\" (UID: \"84037826-7329-4416-831c-c6f25133e427\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.225329 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84037826-7329-4416-831c-c6f25133e427-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g6pkv\" (UID: \"84037826-7329-4416-831c-c6f25133e427\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.226521 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c0aff5-8a3d-4393-9c20-be1c244b760d-config\") pod \"dnsmasq-dns-675f4bcbfc-zrm9m\" (UID: \"05c0aff5-8a3d-4393-9c20-be1c244b760d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.247260 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvh29\" (UniqueName: \"kubernetes.io/projected/05c0aff5-8a3d-4393-9c20-be1c244b760d-kube-api-access-mvh29\") pod \"dnsmasq-dns-675f4bcbfc-zrm9m\" (UID: \"05c0aff5-8a3d-4393-9c20-be1c244b760d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 
16:28:00.327109 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56pvk\" (UniqueName: \"kubernetes.io/projected/84037826-7329-4416-831c-c6f25133e427-kube-api-access-56pvk\") pod \"dnsmasq-dns-78dd6ddcc-g6pkv\" (UID: \"84037826-7329-4416-831c-c6f25133e427\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.327167 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84037826-7329-4416-831c-c6f25133e427-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g6pkv\" (UID: \"84037826-7329-4416-831c-c6f25133e427\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.327270 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84037826-7329-4416-831c-c6f25133e427-config\") pod \"dnsmasq-dns-78dd6ddcc-g6pkv\" (UID: \"84037826-7329-4416-831c-c6f25133e427\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.328256 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84037826-7329-4416-831c-c6f25133e427-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g6pkv\" (UID: \"84037826-7329-4416-831c-c6f25133e427\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.328370 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84037826-7329-4416-831c-c6f25133e427-config\") pod \"dnsmasq-dns-78dd6ddcc-g6pkv\" (UID: \"84037826-7329-4416-831c-c6f25133e427\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.330222 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.366093 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56pvk\" (UniqueName: \"kubernetes.io/projected/84037826-7329-4416-831c-c6f25133e427-kube-api-access-56pvk\") pod \"dnsmasq-dns-78dd6ddcc-g6pkv\" (UID: \"84037826-7329-4416-831c-c6f25133e427\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.419184 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.842979 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zrm9m"] Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.945163 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g6pkv"] Jan 29 16:28:00 crc kubenswrapper[4895]: W0129 16:28:00.949706 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84037826_7329_4416_831c_c6f25133e427.slice/crio-7445a200cd15cce0c66cc25a25b1f334b21427719eaf0092f3c586d43933359a WatchSource:0}: Error finding container 7445a200cd15cce0c66cc25a25b1f334b21427719eaf0092f3c586d43933359a: Status 404 returned error can't find the container with id 7445a200cd15cce0c66cc25a25b1f334b21427719eaf0092f3c586d43933359a Jan 29 16:28:00 crc kubenswrapper[4895]: I0129 16:28:00.954281 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" event={"ID":"05c0aff5-8a3d-4393-9c20-be1c244b760d","Type":"ContainerStarted","Data":"69137573cc338ee053713dd69182155e4eb65c86a4a3229212ac2e9cead1ddca"} Jan 29 16:28:01 crc kubenswrapper[4895]: I0129 16:28:01.965580 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" 
event={"ID":"84037826-7329-4416-831c-c6f25133e427","Type":"ContainerStarted","Data":"7445a200cd15cce0c66cc25a25b1f334b21427719eaf0092f3c586d43933359a"} Jan 29 16:28:02 crc kubenswrapper[4895]: I0129 16:28:02.871089 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zrm9m"] Jan 29 16:28:02 crc kubenswrapper[4895]: I0129 16:28:02.911648 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wfr4d"] Jan 29 16:28:02 crc kubenswrapper[4895]: I0129 16:28:02.914660 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:02 crc kubenswrapper[4895]: I0129 16:28:02.928421 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wfr4d"] Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.091279 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89288e6c-c87d-4310-967b-803ebbf81eb8-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wfr4d\" (UID: \"89288e6c-c87d-4310-967b-803ebbf81eb8\") " pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.091349 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89288e6c-c87d-4310-967b-803ebbf81eb8-config\") pod \"dnsmasq-dns-666b6646f7-wfr4d\" (UID: \"89288e6c-c87d-4310-967b-803ebbf81eb8\") " pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.091400 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h4bk\" (UniqueName: \"kubernetes.io/projected/89288e6c-c87d-4310-967b-803ebbf81eb8-kube-api-access-6h4bk\") pod \"dnsmasq-dns-666b6646f7-wfr4d\" (UID: \"89288e6c-c87d-4310-967b-803ebbf81eb8\") " 
pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.165906 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g6pkv"] Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.194044 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h4bk\" (UniqueName: \"kubernetes.io/projected/89288e6c-c87d-4310-967b-803ebbf81eb8-kube-api-access-6h4bk\") pod \"dnsmasq-dns-666b6646f7-wfr4d\" (UID: \"89288e6c-c87d-4310-967b-803ebbf81eb8\") " pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.199141 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89288e6c-c87d-4310-967b-803ebbf81eb8-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wfr4d\" (UID: \"89288e6c-c87d-4310-967b-803ebbf81eb8\") " pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.199192 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89288e6c-c87d-4310-967b-803ebbf81eb8-config\") pod \"dnsmasq-dns-666b6646f7-wfr4d\" (UID: \"89288e6c-c87d-4310-967b-803ebbf81eb8\") " pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.200412 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89288e6c-c87d-4310-967b-803ebbf81eb8-config\") pod \"dnsmasq-dns-666b6646f7-wfr4d\" (UID: \"89288e6c-c87d-4310-967b-803ebbf81eb8\") " pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.201593 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89288e6c-c87d-4310-967b-803ebbf81eb8-dns-svc\") pod \"dnsmasq-dns-666b6646f7-wfr4d\" 
(UID: \"89288e6c-c87d-4310-967b-803ebbf81eb8\") " pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.216370 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g2chz"] Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.218499 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.222108 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h4bk\" (UniqueName: \"kubernetes.io/projected/89288e6c-c87d-4310-967b-803ebbf81eb8-kube-api-access-6h4bk\") pod \"dnsmasq-dns-666b6646f7-wfr4d\" (UID: \"89288e6c-c87d-4310-967b-803ebbf81eb8\") " pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.234961 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g2chz"] Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.256326 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.303685 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1211a4fd-b5aa-41b9-8d9e-25437148b486-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-g2chz\" (UID: \"1211a4fd-b5aa-41b9-8d9e-25437148b486\") " pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.306058 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qp2s\" (UniqueName: \"kubernetes.io/projected/1211a4fd-b5aa-41b9-8d9e-25437148b486-kube-api-access-6qp2s\") pod \"dnsmasq-dns-57d769cc4f-g2chz\" (UID: \"1211a4fd-b5aa-41b9-8d9e-25437148b486\") " pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.306323 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1211a4fd-b5aa-41b9-8d9e-25437148b486-config\") pod \"dnsmasq-dns-57d769cc4f-g2chz\" (UID: \"1211a4fd-b5aa-41b9-8d9e-25437148b486\") " pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.413731 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qp2s\" (UniqueName: \"kubernetes.io/projected/1211a4fd-b5aa-41b9-8d9e-25437148b486-kube-api-access-6qp2s\") pod \"dnsmasq-dns-57d769cc4f-g2chz\" (UID: \"1211a4fd-b5aa-41b9-8d9e-25437148b486\") " pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.413843 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1211a4fd-b5aa-41b9-8d9e-25437148b486-config\") pod \"dnsmasq-dns-57d769cc4f-g2chz\" (UID: 
\"1211a4fd-b5aa-41b9-8d9e-25437148b486\") " pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.413913 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1211a4fd-b5aa-41b9-8d9e-25437148b486-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-g2chz\" (UID: \"1211a4fd-b5aa-41b9-8d9e-25437148b486\") " pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.415056 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1211a4fd-b5aa-41b9-8d9e-25437148b486-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-g2chz\" (UID: \"1211a4fd-b5aa-41b9-8d9e-25437148b486\") " pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.415062 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1211a4fd-b5aa-41b9-8d9e-25437148b486-config\") pod \"dnsmasq-dns-57d769cc4f-g2chz\" (UID: \"1211a4fd-b5aa-41b9-8d9e-25437148b486\") " pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.453763 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qp2s\" (UniqueName: \"kubernetes.io/projected/1211a4fd-b5aa-41b9-8d9e-25437148b486-kube-api-access-6qp2s\") pod \"dnsmasq-dns-57d769cc4f-g2chz\" (UID: \"1211a4fd-b5aa-41b9-8d9e-25437148b486\") " pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.574525 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.834398 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wfr4d"] Jan 29 16:28:03 crc kubenswrapper[4895]: W0129 16:28:03.841449 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89288e6c_c87d_4310_967b_803ebbf81eb8.slice/crio-0f961579409c68838a5b8e0f3d0851d64d025979a6859da9df80b1fc6e1214f2 WatchSource:0}: Error finding container 0f961579409c68838a5b8e0f3d0851d64d025979a6859da9df80b1fc6e1214f2: Status 404 returned error can't find the container with id 0f961579409c68838a5b8e0f3d0851d64d025979a6859da9df80b1fc6e1214f2 Jan 29 16:28:03 crc kubenswrapper[4895]: I0129 16:28:03.992230 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" event={"ID":"89288e6c-c87d-4310-967b-803ebbf81eb8","Type":"ContainerStarted","Data":"0f961579409c68838a5b8e0f3d0851d64d025979a6859da9df80b1fc6e1214f2"} Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.029847 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.031020 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.033894 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.035048 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.035235 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.039743 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9r9w4" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.040352 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.040422 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.040475 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.053654 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.097545 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g2chz"] Jan 29 16:28:04 crc kubenswrapper[4895]: W0129 16:28:04.107461 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1211a4fd_b5aa_41b9_8d9e_25437148b486.slice/crio-3b74b32986c6ac99e116ea2e40b2d9300f7f85cb61b3277ba103dad5eb8d10a6 WatchSource:0}: Error finding container 3b74b32986c6ac99e116ea2e40b2d9300f7f85cb61b3277ba103dad5eb8d10a6: Status 404 returned error 
can't find the container with id 3b74b32986c6ac99e116ea2e40b2d9300f7f85cb61b3277ba103dad5eb8d10a6 Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.142493 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.142535 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.142554 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.142580 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.142629 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f23fdbdb-0285-4d43-b9bd-923b372eaf42-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " 
pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.142653 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srw6v\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-kube-api-access-srw6v\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.142689 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.142732 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.142756 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-config-data\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.142828 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f23fdbdb-0285-4d43-b9bd-923b372eaf42-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: 
I0129 16:28:04.142847 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.243899 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.243952 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-config-data\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.243983 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f23fdbdb-0285-4d43-b9bd-923b372eaf42-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.244002 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.244033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.244047 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.244064 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.244094 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.244141 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f23fdbdb-0285-4d43-b9bd-923b372eaf42-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.244166 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srw6v\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-kube-api-access-srw6v\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") 
" pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.244214 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.245402 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.245791 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.247159 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.248110 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-config-data\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.248500 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.251233 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.251983 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f23fdbdb-0285-4d43-b9bd-923b372eaf42-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.253780 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.267480 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.268062 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f23fdbdb-0285-4d43-b9bd-923b372eaf42-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " 
pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.273591 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srw6v\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-kube-api-access-srw6v\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.273826 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.363407 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.404763 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.407429 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.412752 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-w6b6m" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.413421 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.414917 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.415380 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.415617 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.415853 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.416333 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.418800 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.556956 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3729063-b6e8-4de8-9ab9-7448a3ec325a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.557014 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.557064 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.557092 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb46f\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-kube-api-access-nb46f\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.557121 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.557144 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.557178 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3729063-b6e8-4de8-9ab9-7448a3ec325a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.557263 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.557325 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.557389 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.557421 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.660787 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.660842 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.660922 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3729063-b6e8-4de8-9ab9-7448a3ec325a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.660953 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.660981 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.661082 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.661123 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.661171 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3729063-b6e8-4de8-9ab9-7448a3ec325a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.661207 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.661292 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.661327 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb46f\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-kube-api-access-nb46f\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") 
" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.661627 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.663027 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.664884 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.665644 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.668992 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.676954 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.677330 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.680450 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3729063-b6e8-4de8-9ab9-7448a3ec325a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.681461 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.682935 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3729063-b6e8-4de8-9ab9-7448a3ec325a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.692578 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb46f\" (UniqueName: 
\"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-kube-api-access-nb46f\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.709085 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.779511 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:28:04 crc kubenswrapper[4895]: I0129 16:28:04.988331 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.000359 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" event={"ID":"1211a4fd-b5aa-41b9-8d9e-25437148b486","Type":"ContainerStarted","Data":"3b74b32986c6ac99e116ea2e40b2d9300f7f85cb61b3277ba103dad5eb8d10a6"} Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.607151 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.609614 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.614813 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-z46gz" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.615219 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.616063 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.616661 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.618963 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.628860 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.791715 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.791780 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46fhh\" (UniqueName: \"kubernetes.io/projected/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-kube-api-access-46fhh\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.791823 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-kolla-config\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.791891 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-config-data-default\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.792042 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.792100 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.792118 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.792169 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.894359 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.894452 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.894477 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.894511 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.894572 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.894599 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46fhh\" (UniqueName: \"kubernetes.io/projected/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-kube-api-access-46fhh\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.894628 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-kolla-config\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.894665 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-config-data-default\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.895769 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.896802 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc 
kubenswrapper[4895]: I0129 16:28:05.898124 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.898795 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-config-data-default\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.904077 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-kolla-config\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.905230 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.933049 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46fhh\" (UniqueName: \"kubernetes.io/projected/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-kube-api-access-46fhh\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.948040 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.948754 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef5d7b98-96fe-49e9-ba5b-f662a93ce514-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ef5d7b98-96fe-49e9-ba5b-f662a93ce514\") " pod="openstack/openstack-galera-0" Jan 29 16:28:05 crc kubenswrapper[4895]: I0129 16:28:05.954842 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.103459 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.107271 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.111132 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.111567 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.111569 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.116749 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9bq29" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.124643 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.221284 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/41493083-077b-4518-a749-48a27e14b2a7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.221353 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41493083-077b-4518-a749-48a27e14b2a7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.221381 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/41493083-077b-4518-a749-48a27e14b2a7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.221425 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41493083-077b-4518-a749-48a27e14b2a7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.221483 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41493083-077b-4518-a749-48a27e14b2a7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.221527 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqp82\" (UniqueName: \"kubernetes.io/projected/41493083-077b-4518-a749-48a27e14b2a7-kube-api-access-sqp82\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.221558 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.221591 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/41493083-077b-4518-a749-48a27e14b2a7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.241654 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.243263 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.246704 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.247170 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.247396 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-hdgnf" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.269153 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324086 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41493083-077b-4518-a749-48a27e14b2a7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324169 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/41493083-077b-4518-a749-48a27e14b2a7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324203 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41493083-077b-4518-a749-48a27e14b2a7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324234 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q8bz\" (UniqueName: \"kubernetes.io/projected/d22c33fb-a278-483f-ae02-d85d04ac9381-kube-api-access-9q8bz\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324258 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41493083-077b-4518-a749-48a27e14b2a7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324282 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqp82\" (UniqueName: \"kubernetes.io/projected/41493083-077b-4518-a749-48a27e14b2a7-kube-api-access-sqp82\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324302 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d22c33fb-a278-483f-ae02-d85d04ac9381-config-data\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324330 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324356 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d22c33fb-a278-483f-ae02-d85d04ac9381-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324380 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22c33fb-a278-483f-ae02-d85d04ac9381-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324403 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/41493083-077b-4518-a749-48a27e14b2a7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324428 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d22c33fb-a278-483f-ae02-d85d04ac9381-kolla-config\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324448 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/41493083-077b-4518-a749-48a27e14b2a7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.324934 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/41493083-077b-4518-a749-48a27e14b2a7-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.326819 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.327160 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41493083-077b-4518-a749-48a27e14b2a7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.327418 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/41493083-077b-4518-a749-48a27e14b2a7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.327646 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/41493083-077b-4518-a749-48a27e14b2a7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.333612 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/41493083-077b-4518-a749-48a27e14b2a7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " 
pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.339066 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41493083-077b-4518-a749-48a27e14b2a7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.350362 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqp82\" (UniqueName: \"kubernetes.io/projected/41493083-077b-4518-a749-48a27e14b2a7-kube-api-access-sqp82\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.352543 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"41493083-077b-4518-a749-48a27e14b2a7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.426129 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d22c33fb-a278-483f-ae02-d85d04ac9381-config-data\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.426206 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d22c33fb-a278-483f-ae02-d85d04ac9381-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.426242 4895 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22c33fb-a278-483f-ae02-d85d04ac9381-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.426284 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d22c33fb-a278-483f-ae02-d85d04ac9381-kolla-config\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.426373 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q8bz\" (UniqueName: \"kubernetes.io/projected/d22c33fb-a278-483f-ae02-d85d04ac9381-kube-api-access-9q8bz\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.427664 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d22c33fb-a278-483f-ae02-d85d04ac9381-config-data\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.429925 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d22c33fb-a278-483f-ae02-d85d04ac9381-kolla-config\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.432099 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d22c33fb-a278-483f-ae02-d85d04ac9381-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 
16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.434575 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.434743 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22c33fb-a278-483f-ae02-d85d04ac9381-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.452446 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q8bz\" (UniqueName: \"kubernetes.io/projected/d22c33fb-a278-483f-ae02-d85d04ac9381-kube-api-access-9q8bz\") pod \"memcached-0\" (UID: \"d22c33fb-a278-483f-ae02-d85d04ac9381\") " pod="openstack/memcached-0" Jan 29 16:28:07 crc kubenswrapper[4895]: I0129 16:28:07.561757 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 16:28:08 crc kubenswrapper[4895]: W0129 16:28:08.498064 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf23fdbdb_0285_4d43_b9bd_923b372eaf42.slice/crio-57ea66f1cb0eadb044aaa3b204203612ac7b0a6518ed4e90e6c7870a1fde3afb WatchSource:0}: Error finding container 57ea66f1cb0eadb044aaa3b204203612ac7b0a6518ed4e90e6c7870a1fde3afb: Status 404 returned error can't find the container with id 57ea66f1cb0eadb044aaa3b204203612ac7b0a6518ed4e90e6c7870a1fde3afb Jan 29 16:28:09 crc kubenswrapper[4895]: I0129 16:28:09.066163 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f23fdbdb-0285-4d43-b9bd-923b372eaf42","Type":"ContainerStarted","Data":"57ea66f1cb0eadb044aaa3b204203612ac7b0a6518ed4e90e6c7870a1fde3afb"} Jan 29 16:28:09 crc kubenswrapper[4895]: I0129 16:28:09.259271 4895 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:28:09 crc kubenswrapper[4895]: I0129 16:28:09.260720 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:28:09 crc kubenswrapper[4895]: I0129 16:28:09.264574 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-4pgv4" Jan 29 16:28:09 crc kubenswrapper[4895]: I0129 16:28:09.294402 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:28:09 crc kubenswrapper[4895]: I0129 16:28:09.374465 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kktng\" (UniqueName: \"kubernetes.io/projected/f3ac4bcf-7c2a-48e7-9921-3035bdc8f488-kube-api-access-kktng\") pod \"kube-state-metrics-0\" (UID: \"f3ac4bcf-7c2a-48e7-9921-3035bdc8f488\") " pod="openstack/kube-state-metrics-0" Jan 29 16:28:09 crc kubenswrapper[4895]: I0129 16:28:09.479382 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kktng\" (UniqueName: \"kubernetes.io/projected/f3ac4bcf-7c2a-48e7-9921-3035bdc8f488-kube-api-access-kktng\") pod \"kube-state-metrics-0\" (UID: \"f3ac4bcf-7c2a-48e7-9921-3035bdc8f488\") " pod="openstack/kube-state-metrics-0" Jan 29 16:28:09 crc kubenswrapper[4895]: I0129 16:28:09.502647 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kktng\" (UniqueName: \"kubernetes.io/projected/f3ac4bcf-7c2a-48e7-9921-3035bdc8f488-kube-api-access-kktng\") pod \"kube-state-metrics-0\" (UID: \"f3ac4bcf-7c2a-48e7-9921-3035bdc8f488\") " pod="openstack/kube-state-metrics-0" Jan 29 16:28:09 crc kubenswrapper[4895]: I0129 16:28:09.597499 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.878483 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7h6nr"] Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.882316 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.893030 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.894237 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.894453 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pgjk2" Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.915416 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7h6nr"] Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.961561 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.963193 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-scripts\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.963278 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-var-log-ovn\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" 
Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.963337 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-var-run-ovn\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.963413 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc76x\" (UniqueName: \"kubernetes.io/projected/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-kube-api-access-pc76x\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.963480 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-ovn-controller-tls-certs\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.963524 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-var-run\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.963570 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-combined-ca-bundle\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:12 crc 
kubenswrapper[4895]: I0129 16:28:12.978939 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9d8jr"] Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.981281 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:12 crc kubenswrapper[4895]: I0129 16:28:12.991405 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9d8jr"] Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065202 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-ovn-controller-tls-certs\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065277 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dddd2\" (UniqueName: \"kubernetes.io/projected/1b3f9699-0154-45bb-a444-85cc44faac88-kube-api-access-dddd2\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065316 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-var-run\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065359 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-combined-ca-bundle\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " 
pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065380 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b3f9699-0154-45bb-a444-85cc44faac88-scripts\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065408 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-scripts\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065434 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-var-log-ovn\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065458 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1b3f9699-0154-45bb-a444-85cc44faac88-var-log\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065491 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-var-run-ovn\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065511 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1b3f9699-0154-45bb-a444-85cc44faac88-var-run\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065532 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1b3f9699-0154-45bb-a444-85cc44faac88-var-lib\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065562 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc76x\" (UniqueName: \"kubernetes.io/projected/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-kube-api-access-pc76x\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.065605 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1b3f9699-0154-45bb-a444-85cc44faac88-etc-ovs\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.066950 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-var-log-ovn\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.067026 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-var-run-ovn\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.067151 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-var-run\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.069889 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-scripts\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.075108 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-combined-ca-bundle\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.091258 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc76x\" (UniqueName: \"kubernetes.io/projected/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-kube-api-access-pc76x\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.096289 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ab7a53-0bcb-4f36-baa2-8d125d379bd3-ovn-controller-tls-certs\") pod \"ovn-controller-7h6nr\" (UID: \"37ab7a53-0bcb-4f36-baa2-8d125d379bd3\") " 
pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.167189 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1b3f9699-0154-45bb-a444-85cc44faac88-var-log\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.167272 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1b3f9699-0154-45bb-a444-85cc44faac88-var-run\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.167301 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1b3f9699-0154-45bb-a444-85cc44faac88-var-lib\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.167380 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1b3f9699-0154-45bb-a444-85cc44faac88-etc-ovs\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.167430 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dddd2\" (UniqueName: \"kubernetes.io/projected/1b3f9699-0154-45bb-a444-85cc44faac88-kube-api-access-dddd2\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.167505 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b3f9699-0154-45bb-a444-85cc44faac88-scripts\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.168445 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1b3f9699-0154-45bb-a444-85cc44faac88-var-log\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.168598 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1b3f9699-0154-45bb-a444-85cc44faac88-var-run\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.168799 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1b3f9699-0154-45bb-a444-85cc44faac88-var-lib\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.169746 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1b3f9699-0154-45bb-a444-85cc44faac88-etc-ovs\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.171370 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b3f9699-0154-45bb-a444-85cc44faac88-scripts\") pod \"ovn-controller-ovs-9d8jr\" (UID: 
\"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.202505 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dddd2\" (UniqueName: \"kubernetes.io/projected/1b3f9699-0154-45bb-a444-85cc44faac88-kube-api-access-dddd2\") pod \"ovn-controller-ovs-9d8jr\" (UID: \"1b3f9699-0154-45bb-a444-85cc44faac88\") " pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.205321 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.306827 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.755988 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.758225 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.774529 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.778975 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-jlt2q" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.779727 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.779881 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.780066 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.780333 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.885790 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/896fe284-9834-4c99-b82f-1f13cb4b3857-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.887315 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/896fe284-9834-4c99-b82f-1f13cb4b3857-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.887468 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/896fe284-9834-4c99-b82f-1f13cb4b3857-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.887807 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/896fe284-9834-4c99-b82f-1f13cb4b3857-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.888131 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/896fe284-9834-4c99-b82f-1f13cb4b3857-config\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.888423 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/896fe284-9834-4c99-b82f-1f13cb4b3857-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.888665 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.889542 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f96lp\" (UniqueName: 
\"kubernetes.io/projected/896fe284-9834-4c99-b82f-1f13cb4b3857-kube-api-access-f96lp\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.992556 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/896fe284-9834-4c99-b82f-1f13cb4b3857-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.992623 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/896fe284-9834-4c99-b82f-1f13cb4b3857-config\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.992667 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/896fe284-9834-4c99-b82f-1f13cb4b3857-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.992701 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.992758 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f96lp\" (UniqueName: \"kubernetes.io/projected/896fe284-9834-4c99-b82f-1f13cb4b3857-kube-api-access-f96lp\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 
29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.992837 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/896fe284-9834-4c99-b82f-1f13cb4b3857-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.993634 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/896fe284-9834-4c99-b82f-1f13cb4b3857-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.993832 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/896fe284-9834-4c99-b82f-1f13cb4b3857-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.993969 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.995289 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/896fe284-9834-4c99-b82f-1f13cb4b3857-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.996313 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/896fe284-9834-4c99-b82f-1f13cb4b3857-config\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:13 crc kubenswrapper[4895]: I0129 16:28:13.996533 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/896fe284-9834-4c99-b82f-1f13cb4b3857-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:14 crc kubenswrapper[4895]: I0129 16:28:14.003221 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/896fe284-9834-4c99-b82f-1f13cb4b3857-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:14 crc kubenswrapper[4895]: I0129 16:28:14.010815 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/896fe284-9834-4c99-b82f-1f13cb4b3857-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:14 crc kubenswrapper[4895]: I0129 16:28:14.010841 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/896fe284-9834-4c99-b82f-1f13cb4b3857-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:14 crc kubenswrapper[4895]: I0129 16:28:14.014546 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f96lp\" (UniqueName: \"kubernetes.io/projected/896fe284-9834-4c99-b82f-1f13cb4b3857-kube-api-access-f96lp\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " 
pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:14 crc kubenswrapper[4895]: I0129 16:28:14.055381 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"896fe284-9834-4c99-b82f-1f13cb4b3857\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:14 crc kubenswrapper[4895]: I0129 16:28:14.119847 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.855710 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.860741 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.865366 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.866092 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xlmsm" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.867178 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.867808 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.873423 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.973255 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ffe9927-329d-4120-b676-a27782b60e94-scripts\") pod \"ovsdbserver-sb-0\" 
(UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.973634 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ffe9927-329d-4120-b676-a27782b60e94-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.973765 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ffe9927-329d-4120-b676-a27782b60e94-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.973838 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.973888 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwrlq\" (UniqueName: \"kubernetes.io/projected/1ffe9927-329d-4120-b676-a27782b60e94-kube-api-access-hwrlq\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.973925 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ffe9927-329d-4120-b676-a27782b60e94-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " 
pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.973960 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ffe9927-329d-4120-b676-a27782b60e94-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:16 crc kubenswrapper[4895]: I0129 16:28:16.974086 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffe9927-329d-4120-b676-a27782b60e94-config\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.076539 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ffe9927-329d-4120-b676-a27782b60e94-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.076608 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ffe9927-329d-4120-b676-a27782b60e94-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.076682 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffe9927-329d-4120-b676-a27782b60e94-config\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.076734 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ffe9927-329d-4120-b676-a27782b60e94-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.076771 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ffe9927-329d-4120-b676-a27782b60e94-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.076804 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ffe9927-329d-4120-b676-a27782b60e94-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.076901 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.076947 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwrlq\" (UniqueName: \"kubernetes.io/projected/1ffe9927-329d-4120-b676-a27782b60e94-kube-api-access-hwrlq\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.078842 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"1ffe9927-329d-4120-b676-a27782b60e94\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.080069 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffe9927-329d-4120-b676-a27782b60e94-config\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.080589 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ffe9927-329d-4120-b676-a27782b60e94-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.081085 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ffe9927-329d-4120-b676-a27782b60e94-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.085988 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ffe9927-329d-4120-b676-a27782b60e94-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.086787 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ffe9927-329d-4120-b676-a27782b60e94-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.096622 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ffe9927-329d-4120-b676-a27782b60e94-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.099671 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwrlq\" (UniqueName: \"kubernetes.io/projected/1ffe9927-329d-4120-b676-a27782b60e94-kube-api-access-hwrlq\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.102503 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1ffe9927-329d-4120-b676-a27782b60e94\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:17 crc kubenswrapper[4895]: I0129 16:28:17.200697 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 16:28:20 crc kubenswrapper[4895]: W0129 16:28:20.847773 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41493083_077b_4518_a749_48a27e14b2a7.slice/crio-35bdd0f2cfc6c07bcbed8e8383a7ef8476db733ad9ddf6e2d70f92deba4b61c3 WatchSource:0}: Error finding container 35bdd0f2cfc6c07bcbed8e8383a7ef8476db733ad9ddf6e2d70f92deba4b61c3: Status 404 returned error can't find the container with id 35bdd0f2cfc6c07bcbed8e8383a7ef8476db733ad9ddf6e2d70f92deba4b61c3 Jan 29 16:28:21 crc kubenswrapper[4895]: I0129 16:28:21.200470 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"41493083-077b-4518-a749-48a27e14b2a7","Type":"ContainerStarted","Data":"35bdd0f2cfc6c07bcbed8e8383a7ef8476db733ad9ddf6e2d70f92deba4b61c3"} Jan 29 16:28:21 crc kubenswrapper[4895]: I0129 16:28:21.337774 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 16:28:21 crc kubenswrapper[4895]: E0129 16:28:21.779914 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 16:28:21 crc kubenswrapper[4895]: E0129 16:28:21.781176 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvh29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-zrm9m_openstack(05c0aff5-8a3d-4393-9c20-be1c244b760d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:28:21 crc kubenswrapper[4895]: E0129 16:28:21.782450 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" podUID="05c0aff5-8a3d-4393-9c20-be1c244b760d" Jan 29 16:28:23 crc kubenswrapper[4895]: W0129 16:28:23.641343 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef5d7b98_96fe_49e9_ba5b_f662a93ce514.slice/crio-5125d3d0f64c3169e0f2d53922dd3f8c51fb1a8ebaa444c82cfebc34deae3fa0 WatchSource:0}: Error finding container 5125d3d0f64c3169e0f2d53922dd3f8c51fb1a8ebaa444c82cfebc34deae3fa0: Status 404 returned error can't find the container with id 5125d3d0f64c3169e0f2d53922dd3f8c51fb1a8ebaa444c82cfebc34deae3fa0 Jan 29 16:28:23 crc kubenswrapper[4895]: E0129 16:28:23.641530 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 29 16:28:23 crc kubenswrapper[4895]: E0129 16:28:23.642406 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srw6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(f23fdbdb-0285-4d43-b9bd-923b372eaf42): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:28:23 crc 
kubenswrapper[4895]: E0129 16:28:23.643540 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="f23fdbdb-0285-4d43-b9bd-923b372eaf42" Jan 29 16:28:23 crc kubenswrapper[4895]: E0129 16:28:23.789052 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 16:28:23 crc kubenswrapper[4895]: E0129 16:28:23.789356 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56pvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-g6pkv_openstack(84037826-7329-4416-831c-c6f25133e427): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:28:23 crc kubenswrapper[4895]: E0129 16:28:23.790671 4895 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" podUID="84037826-7329-4416-831c-c6f25133e427" Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.106650 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.134772 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.234287 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ef5d7b98-96fe-49e9-ba5b-f662a93ce514","Type":"ContainerStarted","Data":"5125d3d0f64c3169e0f2d53922dd3f8c51fb1a8ebaa444c82cfebc34deae3fa0"} Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.238781 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.238979 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-zrm9m" event={"ID":"05c0aff5-8a3d-4393-9c20-be1c244b760d","Type":"ContainerDied","Data":"69137573cc338ee053713dd69182155e4eb65c86a4a3229212ac2e9cead1ddca"} Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.244027 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvh29\" (UniqueName: \"kubernetes.io/projected/05c0aff5-8a3d-4393-9c20-be1c244b760d-kube-api-access-mvh29\") pod \"05c0aff5-8a3d-4393-9c20-be1c244b760d\" (UID: \"05c0aff5-8a3d-4393-9c20-be1c244b760d\") " Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.244333 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c0aff5-8a3d-4393-9c20-be1c244b760d-config\") pod \"05c0aff5-8a3d-4393-9c20-be1c244b760d\" (UID: \"05c0aff5-8a3d-4393-9c20-be1c244b760d\") " Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.247195 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05c0aff5-8a3d-4393-9c20-be1c244b760d-config" (OuterVolumeSpecName: "config") pod "05c0aff5-8a3d-4393-9c20-be1c244b760d" (UID: "05c0aff5-8a3d-4393-9c20-be1c244b760d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.257436 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c0aff5-8a3d-4393-9c20-be1c244b760d-kube-api-access-mvh29" (OuterVolumeSpecName: "kube-api-access-mvh29") pod "05c0aff5-8a3d-4393-9c20-be1c244b760d" (UID: "05c0aff5-8a3d-4393-9c20-be1c244b760d"). InnerVolumeSpecName "kube-api-access-mvh29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.347312 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05c0aff5-8a3d-4393-9c20-be1c244b760d-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.347357 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvh29\" (UniqueName: \"kubernetes.io/projected/05c0aff5-8a3d-4393-9c20-be1c244b760d-kube-api-access-mvh29\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.557260 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.612784 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zrm9m"] Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.629358 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zrm9m"] Jan 29 16:28:24 crc kubenswrapper[4895]: I0129 16:28:24.664970 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:28:25 crc kubenswrapper[4895]: I0129 16:28:25.050663 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c0aff5-8a3d-4393-9c20-be1c244b760d" path="/var/lib/kubelet/pods/05c0aff5-8a3d-4393-9c20-be1c244b760d/volumes" Jan 29 16:28:25 crc kubenswrapper[4895]: E0129 16:28:25.272487 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="f23fdbdb-0285-4d43-b9bd-923b372eaf42" Jan 29 16:28:26 crc kubenswrapper[4895]: W0129 16:28:26.054374 4895 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd22c33fb_a278_483f_ae02_d85d04ac9381.slice/crio-0383293b829985b0ef4a685277cd4575f86b255f53ccf31f828485d23bdbf78c WatchSource:0}: Error finding container 0383293b829985b0ef4a685277cd4575f86b255f53ccf31f828485d23bdbf78c: Status 404 returned error can't find the container with id 0383293b829985b0ef4a685277cd4575f86b255f53ccf31f828485d23bdbf78c Jan 29 16:28:26 crc kubenswrapper[4895]: W0129 16:28:26.055322 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3ac4bcf_7c2a_48e7_9921_3035bdc8f488.slice/crio-29be0e83256f2fc2c54d7d29021386d10effea33def2950e2c278b149edec722 WatchSource:0}: Error finding container 29be0e83256f2fc2c54d7d29021386d10effea33def2950e2c278b149edec722: Status 404 returned error can't find the container with id 29be0e83256f2fc2c54d7d29021386d10effea33def2950e2c278b149edec722 Jan 29 16:28:26 crc kubenswrapper[4895]: W0129 16:28:26.061432 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3729063_b6e8_4de8_9ab9_7448a3ec325a.slice/crio-ad02378dc6116e496fbf7b72c9f5b5fbc857f24e8fc460c1d352e028213ce12a WatchSource:0}: Error finding container ad02378dc6116e496fbf7b72c9f5b5fbc857f24e8fc460c1d352e028213ce12a: Status 404 returned error can't find the container with id ad02378dc6116e496fbf7b72c9f5b5fbc857f24e8fc460c1d352e028213ce12a Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.125699 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.196571 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84037826-7329-4416-831c-c6f25133e427-dns-svc\") pod \"84037826-7329-4416-831c-c6f25133e427\" (UID: \"84037826-7329-4416-831c-c6f25133e427\") " Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.196767 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84037826-7329-4416-831c-c6f25133e427-config\") pod \"84037826-7329-4416-831c-c6f25133e427\" (UID: \"84037826-7329-4416-831c-c6f25133e427\") " Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.196889 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56pvk\" (UniqueName: \"kubernetes.io/projected/84037826-7329-4416-831c-c6f25133e427-kube-api-access-56pvk\") pod \"84037826-7329-4416-831c-c6f25133e427\" (UID: \"84037826-7329-4416-831c-c6f25133e427\") " Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.198915 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84037826-7329-4416-831c-c6f25133e427-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84037826-7329-4416-831c-c6f25133e427" (UID: "84037826-7329-4416-831c-c6f25133e427"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.199735 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84037826-7329-4416-831c-c6f25133e427-config" (OuterVolumeSpecName: "config") pod "84037826-7329-4416-831c-c6f25133e427" (UID: "84037826-7329-4416-831c-c6f25133e427"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.205098 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84037826-7329-4416-831c-c6f25133e427-kube-api-access-56pvk" (OuterVolumeSpecName: "kube-api-access-56pvk") pod "84037826-7329-4416-831c-c6f25133e427" (UID: "84037826-7329-4416-831c-c6f25133e427"). InnerVolumeSpecName "kube-api-access-56pvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.267605 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d22c33fb-a278-483f-ae02-d85d04ac9381","Type":"ContainerStarted","Data":"0383293b829985b0ef4a685277cd4575f86b255f53ccf31f828485d23bdbf78c"} Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.269398 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f3ac4bcf-7c2a-48e7-9921-3035bdc8f488","Type":"ContainerStarted","Data":"29be0e83256f2fc2c54d7d29021386d10effea33def2950e2c278b149edec722"} Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.271172 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3729063-b6e8-4de8-9ab9-7448a3ec325a","Type":"ContainerStarted","Data":"ad02378dc6116e496fbf7b72c9f5b5fbc857f24e8fc460c1d352e028213ce12a"} Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.272893 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" event={"ID":"84037826-7329-4416-831c-c6f25133e427","Type":"ContainerDied","Data":"7445a200cd15cce0c66cc25a25b1f334b21427719eaf0092f3c586d43933359a"} Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.272978 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g6pkv" Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.298505 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84037826-7329-4416-831c-c6f25133e427-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.298543 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84037826-7329-4416-831c-c6f25133e427-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.298560 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56pvk\" (UniqueName: \"kubernetes.io/projected/84037826-7329-4416-831c-c6f25133e427-kube-api-access-56pvk\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.337244 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g6pkv"] Jan 29 16:28:26 crc kubenswrapper[4895]: I0129 16:28:26.345484 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g6pkv"] Jan 29 16:28:27 crc kubenswrapper[4895]: I0129 16:28:27.051309 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84037826-7329-4416-831c-c6f25133e427" path="/var/lib/kubelet/pods/84037826-7329-4416-831c-c6f25133e427/volumes" Jan 29 16:28:27 crc kubenswrapper[4895]: I0129 16:28:27.822912 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:28:27 crc kubenswrapper[4895]: I0129 16:28:27.823877 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" 
podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:28:28 crc kubenswrapper[4895]: I0129 16:28:28.010519 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:28:28 crc kubenswrapper[4895]: W0129 16:28:28.016155 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ffe9927_329d_4120_b676_a27782b60e94.slice/crio-2de8ccbe6b65eefd137052a0a952df9b2e114cdf4249f0a9a5bb65686bc698eb WatchSource:0}: Error finding container 2de8ccbe6b65eefd137052a0a952df9b2e114cdf4249f0a9a5bb65686bc698eb: Status 404 returned error can't find the container with id 2de8ccbe6b65eefd137052a0a952df9b2e114cdf4249f0a9a5bb65686bc698eb Jan 29 16:28:28 crc kubenswrapper[4895]: I0129 16:28:28.045853 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7h6nr"] Jan 29 16:28:28 crc kubenswrapper[4895]: I0129 16:28:28.107436 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 16:28:28 crc kubenswrapper[4895]: W0129 16:28:28.121848 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod896fe284_9834_4c99_b82f_1f13cb4b3857.slice/crio-988ef46ff7fac78d5c9b4178f50fe2e925041ab99f9a140fab33125a99b67b4a WatchSource:0}: Error finding container 988ef46ff7fac78d5c9b4178f50fe2e925041ab99f9a140fab33125a99b67b4a: Status 404 returned error can't find the container with id 988ef46ff7fac78d5c9b4178f50fe2e925041ab99f9a140fab33125a99b67b4a Jan 29 16:28:28 crc kubenswrapper[4895]: I0129 16:28:28.187416 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9d8jr"] Jan 29 16:28:28 crc kubenswrapper[4895]: W0129 16:28:28.193847 4895 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b3f9699_0154_45bb_a444_85cc44faac88.slice/crio-5e2c6bc5b9349decd41a020a01d7988d890ea25bd6f68b4b6694ecf6e4159309 WatchSource:0}: Error finding container 5e2c6bc5b9349decd41a020a01d7988d890ea25bd6f68b4b6694ecf6e4159309: Status 404 returned error can't find the container with id 5e2c6bc5b9349decd41a020a01d7988d890ea25bd6f68b4b6694ecf6e4159309 Jan 29 16:28:28 crc kubenswrapper[4895]: I0129 16:28:28.295523 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9d8jr" event={"ID":"1b3f9699-0154-45bb-a444-85cc44faac88","Type":"ContainerStarted","Data":"5e2c6bc5b9349decd41a020a01d7988d890ea25bd6f68b4b6694ecf6e4159309"} Jan 29 16:28:28 crc kubenswrapper[4895]: I0129 16:28:28.296665 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1ffe9927-329d-4120-b676-a27782b60e94","Type":"ContainerStarted","Data":"2de8ccbe6b65eefd137052a0a952df9b2e114cdf4249f0a9a5bb65686bc698eb"} Jan 29 16:28:28 crc kubenswrapper[4895]: I0129 16:28:28.298534 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"896fe284-9834-4c99-b82f-1f13cb4b3857","Type":"ContainerStarted","Data":"988ef46ff7fac78d5c9b4178f50fe2e925041ab99f9a140fab33125a99b67b4a"} Jan 29 16:28:28 crc kubenswrapper[4895]: I0129 16:28:28.300015 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7h6nr" event={"ID":"37ab7a53-0bcb-4f36-baa2-8d125d379bd3","Type":"ContainerStarted","Data":"5f5cb73ff72bba91af98128160dfbffdf82059c5fc183b1c0e9c23fa48c18316"} Jan 29 16:28:29 crc kubenswrapper[4895]: I0129 16:28:29.314803 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" event={"ID":"1211a4fd-b5aa-41b9-8d9e-25437148b486","Type":"ContainerStarted","Data":"5e06a4da9f2e4ee9f85d16205907044360814158045b3a5ba709c5f854418636"} Jan 29 
16:28:29 crc kubenswrapper[4895]: I0129 16:28:29.320845 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" event={"ID":"89288e6c-c87d-4310-967b-803ebbf81eb8","Type":"ContainerStarted","Data":"71cfa52bffea4c95af3fb2cdadb2504befa90e5ee96505d3e66d059df9ff1d57"} Jan 29 16:28:30 crc kubenswrapper[4895]: I0129 16:28:30.332262 4895 generic.go:334] "Generic (PLEG): container finished" podID="1211a4fd-b5aa-41b9-8d9e-25437148b486" containerID="5e06a4da9f2e4ee9f85d16205907044360814158045b3a5ba709c5f854418636" exitCode=0 Jan 29 16:28:30 crc kubenswrapper[4895]: I0129 16:28:30.332355 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" event={"ID":"1211a4fd-b5aa-41b9-8d9e-25437148b486","Type":"ContainerDied","Data":"5e06a4da9f2e4ee9f85d16205907044360814158045b3a5ba709c5f854418636"} Jan 29 16:28:30 crc kubenswrapper[4895]: I0129 16:28:30.334222 4895 generic.go:334] "Generic (PLEG): container finished" podID="89288e6c-c87d-4310-967b-803ebbf81eb8" containerID="71cfa52bffea4c95af3fb2cdadb2504befa90e5ee96505d3e66d059df9ff1d57" exitCode=0 Jan 29 16:28:30 crc kubenswrapper[4895]: I0129 16:28:30.334243 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" event={"ID":"89288e6c-c87d-4310-967b-803ebbf81eb8","Type":"ContainerDied","Data":"71cfa52bffea4c95af3fb2cdadb2504befa90e5ee96505d3e66d059df9ff1d57"} Jan 29 16:28:36 crc kubenswrapper[4895]: I0129 16:28:36.391391 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" event={"ID":"89288e6c-c87d-4310-967b-803ebbf81eb8","Type":"ContainerStarted","Data":"22bef0094f204b14bf391b33a6546b2e1d875007615e09e6fb7add9abc8fcd11"} Jan 29 16:28:36 crc kubenswrapper[4895]: I0129 16:28:36.392017 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:36 crc kubenswrapper[4895]: I0129 16:28:36.394480 
4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" event={"ID":"1211a4fd-b5aa-41b9-8d9e-25437148b486","Type":"ContainerStarted","Data":"16161f0ee5ab15262ec56eea45a2b8eb683cf442f83499cf963598c8e6509d39"} Jan 29 16:28:36 crc kubenswrapper[4895]: I0129 16:28:36.394640 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:36 crc kubenswrapper[4895]: I0129 16:28:36.414881 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" podStartSLOduration=10.917612495 podStartE2EDuration="34.414827413s" podCreationTimestamp="2026-01-29 16:28:02 +0000 UTC" firstStartedPulling="2026-01-29 16:28:03.846374498 +0000 UTC m=+967.649351762" lastFinishedPulling="2026-01-29 16:28:27.343589416 +0000 UTC m=+991.146566680" observedRunningTime="2026-01-29 16:28:36.41360915 +0000 UTC m=+1000.216586414" watchObservedRunningTime="2026-01-29 16:28:36.414827413 +0000 UTC m=+1000.217804677" Jan 29 16:28:36 crc kubenswrapper[4895]: I0129 16:28:36.440241 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" podStartSLOduration=10.192583284 podStartE2EDuration="33.44021599s" podCreationTimestamp="2026-01-29 16:28:03 +0000 UTC" firstStartedPulling="2026-01-29 16:28:04.115550851 +0000 UTC m=+967.918528115" lastFinishedPulling="2026-01-29 16:28:27.363183557 +0000 UTC m=+991.166160821" observedRunningTime="2026-01-29 16:28:36.437942968 +0000 UTC m=+1000.240920242" watchObservedRunningTime="2026-01-29 16:28:36.44021599 +0000 UTC m=+1000.243193264" Jan 29 16:28:38 crc kubenswrapper[4895]: E0129 16:28:38.765021 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 29 16:28:38 crc kubenswrapper[4895]: E0129 
16:28:38.765274 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqp82,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(41493083-077b-4518-a749-48a27e14b2a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:28:38 crc kubenswrapper[4895]: E0129 16:28:38.766496 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="41493083-077b-4518-a749-48a27e14b2a7" Jan 29 16:28:39 crc kubenswrapper[4895]: E0129 16:28:39.430817 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="41493083-077b-4518-a749-48a27e14b2a7" Jan 29 16:28:43 crc kubenswrapper[4895]: I0129 16:28:43.259343 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:43 crc kubenswrapper[4895]: I0129 16:28:43.577792 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:28:43 crc kubenswrapper[4895]: I0129 16:28:43.650142 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wfr4d"] Jan 29 16:28:43 crc kubenswrapper[4895]: I0129 16:28:43.650455 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" podUID="89288e6c-c87d-4310-967b-803ebbf81eb8" containerName="dnsmasq-dns" containerID="cri-o://22bef0094f204b14bf391b33a6546b2e1d875007615e09e6fb7add9abc8fcd11" gracePeriod=10 Jan 29 16:28:44 crc kubenswrapper[4895]: 
I0129 16:28:44.505156 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ef5d7b98-96fe-49e9-ba5b-f662a93ce514","Type":"ContainerStarted","Data":"8c48543cc9be1fbbe0591ba4f469908a6d2399257045afcbdb69471220f0f1ed"} Jan 29 16:28:44 crc kubenswrapper[4895]: I0129 16:28:44.532145 4895 generic.go:334] "Generic (PLEG): container finished" podID="89288e6c-c87d-4310-967b-803ebbf81eb8" containerID="22bef0094f204b14bf391b33a6546b2e1d875007615e09e6fb7add9abc8fcd11" exitCode=0 Jan 29 16:28:44 crc kubenswrapper[4895]: I0129 16:28:44.532221 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" event={"ID":"89288e6c-c87d-4310-967b-803ebbf81eb8","Type":"ContainerDied","Data":"22bef0094f204b14bf391b33a6546b2e1d875007615e09e6fb7add9abc8fcd11"} Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.050479 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.169609 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89288e6c-c87d-4310-967b-803ebbf81eb8-dns-svc\") pod \"89288e6c-c87d-4310-967b-803ebbf81eb8\" (UID: \"89288e6c-c87d-4310-967b-803ebbf81eb8\") " Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.169694 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89288e6c-c87d-4310-967b-803ebbf81eb8-config\") pod \"89288e6c-c87d-4310-967b-803ebbf81eb8\" (UID: \"89288e6c-c87d-4310-967b-803ebbf81eb8\") " Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.169837 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h4bk\" (UniqueName: \"kubernetes.io/projected/89288e6c-c87d-4310-967b-803ebbf81eb8-kube-api-access-6h4bk\") pod 
\"89288e6c-c87d-4310-967b-803ebbf81eb8\" (UID: \"89288e6c-c87d-4310-967b-803ebbf81eb8\") " Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.179497 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89288e6c-c87d-4310-967b-803ebbf81eb8-kube-api-access-6h4bk" (OuterVolumeSpecName: "kube-api-access-6h4bk") pod "89288e6c-c87d-4310-967b-803ebbf81eb8" (UID: "89288e6c-c87d-4310-967b-803ebbf81eb8"). InnerVolumeSpecName "kube-api-access-6h4bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.244035 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89288e6c-c87d-4310-967b-803ebbf81eb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89288e6c-c87d-4310-967b-803ebbf81eb8" (UID: "89288e6c-c87d-4310-967b-803ebbf81eb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.249981 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89288e6c-c87d-4310-967b-803ebbf81eb8-config" (OuterVolumeSpecName: "config") pod "89288e6c-c87d-4310-967b-803ebbf81eb8" (UID: "89288e6c-c87d-4310-967b-803ebbf81eb8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.271856 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89288e6c-c87d-4310-967b-803ebbf81eb8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.271932 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89288e6c-c87d-4310-967b-803ebbf81eb8-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.271947 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h4bk\" (UniqueName: \"kubernetes.io/projected/89288e6c-c87d-4310-967b-803ebbf81eb8-kube-api-access-6h4bk\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.545088 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.545073 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-wfr4d" event={"ID":"89288e6c-c87d-4310-967b-803ebbf81eb8","Type":"ContainerDied","Data":"0f961579409c68838a5b8e0f3d0851d64d025979a6859da9df80b1fc6e1214f2"} Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.545198 4895 scope.go:117] "RemoveContainer" containerID="22bef0094f204b14bf391b33a6546b2e1d875007615e09e6fb7add9abc8fcd11" Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.549845 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9d8jr" event={"ID":"1b3f9699-0154-45bb-a444-85cc44faac88","Type":"ContainerStarted","Data":"767a05f83d4d15df06bc86bd216cdea5ada94805275eb65473df40f7df5506f0"} Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.554627 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"d22c33fb-a278-483f-ae02-d85d04ac9381","Type":"ContainerStarted","Data":"7832ccc87edb513ea527b92ab38b19d46b3428f61707ebb6825cb94524264d88"} Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.555503 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.559767 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f23fdbdb-0285-4d43-b9bd-923b372eaf42","Type":"ContainerStarted","Data":"703927b788d49dd2fbc7dcbeded873e6df74abef9151860b8f1949f49dd98c6a"} Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.650804 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.305308475 podStartE2EDuration="38.65077994s" podCreationTimestamp="2026-01-29 16:28:07 +0000 UTC" firstStartedPulling="2026-01-29 16:28:26.061214151 +0000 UTC m=+989.864191415" lastFinishedPulling="2026-01-29 16:28:43.406685626 +0000 UTC m=+1007.209662880" observedRunningTime="2026-01-29 16:28:45.615852855 +0000 UTC m=+1009.418830139" watchObservedRunningTime="2026-01-29 16:28:45.65077994 +0000 UTC m=+1009.453757204" Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.666194 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wfr4d"] Jan 29 16:28:45 crc kubenswrapper[4895]: I0129 16:28:45.691291 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-wfr4d"] Jan 29 16:28:46 crc kubenswrapper[4895]: I0129 16:28:46.233013 4895 scope.go:117] "RemoveContainer" containerID="71cfa52bffea4c95af3fb2cdadb2504befa90e5ee96505d3e66d059df9ff1d57" Jan 29 16:28:46 crc kubenswrapper[4895]: I0129 16:28:46.569788 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"896fe284-9834-4c99-b82f-1f13cb4b3857","Type":"ContainerStarted","Data":"95d873d5430f2bc404dcb182ca95d3544cd6d361c7841718735d4ddec696132d"} Jan 29 16:28:46 crc kubenswrapper[4895]: I0129 16:28:46.573264 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3729063-b6e8-4de8-9ab9-7448a3ec325a","Type":"ContainerStarted","Data":"dedd8cbb1a70735f89c5ebe0a75831ee00fddc22aea7f9aced27f2b11b89c2a8"} Jan 29 16:28:46 crc kubenswrapper[4895]: I0129 16:28:46.582078 4895 generic.go:334] "Generic (PLEG): container finished" podID="1b3f9699-0154-45bb-a444-85cc44faac88" containerID="767a05f83d4d15df06bc86bd216cdea5ada94805275eb65473df40f7df5506f0" exitCode=0 Jan 29 16:28:46 crc kubenswrapper[4895]: I0129 16:28:46.582310 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9d8jr" event={"ID":"1b3f9699-0154-45bb-a444-85cc44faac88","Type":"ContainerDied","Data":"767a05f83d4d15df06bc86bd216cdea5ada94805275eb65473df40f7df5506f0"} Jan 29 16:28:46 crc kubenswrapper[4895]: I0129 16:28:46.586398 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1ffe9927-329d-4120-b676-a27782b60e94","Type":"ContainerStarted","Data":"305cded2d9bdcf2936b105bf75b0869e4ae19a51fa75bfef49000caa0f31308e"} Jan 29 16:28:47 crc kubenswrapper[4895]: I0129 16:28:47.065758 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89288e6c-c87d-4310-967b-803ebbf81eb8" path="/var/lib/kubelet/pods/89288e6c-c87d-4310-967b-803ebbf81eb8/volumes" Jan 29 16:28:47 crc kubenswrapper[4895]: I0129 16:28:47.599381 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f3ac4bcf-7c2a-48e7-9921-3035bdc8f488","Type":"ContainerStarted","Data":"b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb"} Jan 29 16:28:47 crc kubenswrapper[4895]: I0129 16:28:47.600118 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 16:28:47 crc kubenswrapper[4895]: I0129 16:28:47.603253 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7h6nr" event={"ID":"37ab7a53-0bcb-4f36-baa2-8d125d379bd3","Type":"ContainerStarted","Data":"b8e2a0fc9c80113531658cd33f158a9292f69d1530d9c12e28cac7a537336541"} Jan 29 16:28:47 crc kubenswrapper[4895]: I0129 16:28:47.603500 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7h6nr" Jan 29 16:28:47 crc kubenswrapper[4895]: I0129 16:28:47.609351 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9d8jr" event={"ID":"1b3f9699-0154-45bb-a444-85cc44faac88","Type":"ContainerStarted","Data":"8c346070f73905dfd51ca196e2109b368a583c40ab3bf9e53c19ecd1169728e6"} Jan 29 16:28:47 crc kubenswrapper[4895]: I0129 16:28:47.634707 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=19.032450914 podStartE2EDuration="38.634682716s" podCreationTimestamp="2026-01-29 16:28:09 +0000 UTC" firstStartedPulling="2026-01-29 16:28:26.766860865 +0000 UTC m=+990.569838169" lastFinishedPulling="2026-01-29 16:28:46.369092707 +0000 UTC m=+1010.172069971" observedRunningTime="2026-01-29 16:28:47.632346214 +0000 UTC m=+1011.435323488" watchObservedRunningTime="2026-01-29 16:28:47.634682716 +0000 UTC m=+1011.437659980" Jan 29 16:28:47 crc kubenswrapper[4895]: I0129 16:28:47.659817 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7h6nr" podStartSLOduration=20.325175309 podStartE2EDuration="35.659790937s" podCreationTimestamp="2026-01-29 16:28:12 +0000 UTC" firstStartedPulling="2026-01-29 16:28:28.072216142 +0000 UTC m=+991.875193406" lastFinishedPulling="2026-01-29 16:28:43.40683177 +0000 UTC m=+1007.209809034" observedRunningTime="2026-01-29 16:28:47.656429576 +0000 UTC 
m=+1011.459406890" watchObservedRunningTime="2026-01-29 16:28:47.659790937 +0000 UTC m=+1011.462768221"
Jan 29 16:28:48 crc kubenswrapper[4895]: I0129 16:28:48.620130 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"896fe284-9834-4c99-b82f-1f13cb4b3857","Type":"ContainerStarted","Data":"f712f1ffbf54f03bcf70efd38c148f84730e879b8065e74dd6b299472fda869e"}
Jan 29 16:28:48 crc kubenswrapper[4895]: I0129 16:28:48.624626 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9d8jr" event={"ID":"1b3f9699-0154-45bb-a444-85cc44faac88","Type":"ContainerStarted","Data":"3c3f1b6b88cbfe5212a64074b8eed7b144f146a0b10ba8fda4302d1ee86080e0"}
Jan 29 16:28:48 crc kubenswrapper[4895]: I0129 16:28:48.624925 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9d8jr"
Jan 29 16:28:48 crc kubenswrapper[4895]: I0129 16:28:48.632437 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1ffe9927-329d-4120-b676-a27782b60e94","Type":"ContainerStarted","Data":"afa37b61392f3a7c5445594198595caf997ee2497bfddb82cd7466ad66ef3d27"}
Jan 29 16:28:48 crc kubenswrapper[4895]: I0129 16:28:48.648621 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.951282192 podStartE2EDuration="36.648587689s" podCreationTimestamp="2026-01-29 16:28:12 +0000 UTC" firstStartedPulling="2026-01-29 16:28:28.125015802 +0000 UTC m=+991.927993066" lastFinishedPulling="2026-01-29 16:28:47.822321289 +0000 UTC m=+1011.625298563" observedRunningTime="2026-01-29 16:28:48.642854584 +0000 UTC m=+1012.445831858" watchObservedRunningTime="2026-01-29 16:28:48.648587689 +0000 UTC m=+1012.451564963"
Jan 29 16:28:48 crc kubenswrapper[4895]: I0129 16:28:48.672217 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9d8jr" podStartSLOduration=21.461652312 podStartE2EDuration="36.672185759s" podCreationTimestamp="2026-01-29 16:28:12 +0000 UTC" firstStartedPulling="2026-01-29 16:28:28.196110708 +0000 UTC m=+991.999087972" lastFinishedPulling="2026-01-29 16:28:43.406644165 +0000 UTC m=+1007.209621419" observedRunningTime="2026-01-29 16:28:48.66558577 +0000 UTC m=+1012.468563044" watchObservedRunningTime="2026-01-29 16:28:48.672185759 +0000 UTC m=+1012.475163033"
Jan 29 16:28:48 crc kubenswrapper[4895]: I0129 16:28:48.697284 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.878989905 podStartE2EDuration="33.697259438s" podCreationTimestamp="2026-01-29 16:28:15 +0000 UTC" firstStartedPulling="2026-01-29 16:28:28.018518108 +0000 UTC m=+991.821495362" lastFinishedPulling="2026-01-29 16:28:47.836787631 +0000 UTC m=+1011.639764895" observedRunningTime="2026-01-29 16:28:48.68920916 +0000 UTC m=+1012.492186444" watchObservedRunningTime="2026-01-29 16:28:48.697259438 +0000 UTC m=+1012.500236702"
Jan 29 16:28:49 crc kubenswrapper[4895]: I0129 16:28:49.121441 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 29 16:28:49 crc kubenswrapper[4895]: I0129 16:28:49.659404 4895 generic.go:334] "Generic (PLEG): container finished" podID="ef5d7b98-96fe-49e9-ba5b-f662a93ce514" containerID="8c48543cc9be1fbbe0591ba4f469908a6d2399257045afcbdb69471220f0f1ed" exitCode=0
Jan 29 16:28:49 crc kubenswrapper[4895]: I0129 16:28:49.659491 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ef5d7b98-96fe-49e9-ba5b-f662a93ce514","Type":"ContainerDied","Data":"8c48543cc9be1fbbe0591ba4f469908a6d2399257045afcbdb69471220f0f1ed"}
Jan 29 16:28:49 crc kubenswrapper[4895]: I0129 16:28:49.660653 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9d8jr"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.121788 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.179355 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.202456 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.244045 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.673714 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ef5d7b98-96fe-49e9-ba5b-f662a93ce514","Type":"ContainerStarted","Data":"eec3d5157ee479ee6fd100ec8ecd39524d8136ef6b268f507f111b8ef3d90bf9"}
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.674336 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.713761 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.714199 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.130273979 podStartE2EDuration="46.714172349s" podCreationTimestamp="2026-01-29 16:28:04 +0000 UTC" firstStartedPulling="2026-01-29 16:28:24.033260251 +0000 UTC m=+987.836237515" lastFinishedPulling="2026-01-29 16:28:42.617158581 +0000 UTC m=+1006.420135885" observedRunningTime="2026-01-29 16:28:50.703560851 +0000 UTC m=+1014.506538135" watchObservedRunningTime="2026-01-29 16:28:50.714172349 +0000 UTC m=+1014.517149613"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.728506 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.937197 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-tbcb6"]
Jan 29 16:28:50 crc kubenswrapper[4895]: E0129 16:28:50.937699 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89288e6c-c87d-4310-967b-803ebbf81eb8" containerName="dnsmasq-dns"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.937725 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="89288e6c-c87d-4310-967b-803ebbf81eb8" containerName="dnsmasq-dns"
Jan 29 16:28:50 crc kubenswrapper[4895]: E0129 16:28:50.937768 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89288e6c-c87d-4310-967b-803ebbf81eb8" containerName="init"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.937779 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="89288e6c-c87d-4310-967b-803ebbf81eb8" containerName="init"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.937990 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="89288e6c-c87d-4310-967b-803ebbf81eb8" containerName="dnsmasq-dns"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.939252 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.946759 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.951765 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-tbcb6"]
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.976611 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-bfpw2"]
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.977688 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.985853 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.988738 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-config\") pod \"dnsmasq-dns-5bf47b49b7-tbcb6\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.988787 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xslbp\" (UniqueName: \"kubernetes.io/projected/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-kube-api-access-xslbp\") pod \"dnsmasq-dns-5bf47b49b7-tbcb6\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.988849 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-tbcb6\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:50 crc kubenswrapper[4895]: I0129 16:28:50.988931 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-tbcb6\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.022628 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bfpw2"]
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.090499 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkxhr\" (UniqueName: \"kubernetes.io/projected/b973acd5-f963-43d9-8797-dece98571fa9-kube-api-access-kkxhr\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.090568 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b973acd5-f963-43d9-8797-dece98571fa9-combined-ca-bundle\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.090607 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-tbcb6\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.090902 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b973acd5-f963-43d9-8797-dece98571fa9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.090997 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b973acd5-f963-43d9-8797-dece98571fa9-config\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.091033 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b973acd5-f963-43d9-8797-dece98571fa9-ovs-rundir\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.091159 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b973acd5-f963-43d9-8797-dece98571fa9-ovn-rundir\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.091204 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-config\") pod \"dnsmasq-dns-5bf47b49b7-tbcb6\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.091229 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xslbp\" (UniqueName: \"kubernetes.io/projected/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-kube-api-access-xslbp\") pod \"dnsmasq-dns-5bf47b49b7-tbcb6\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.091294 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-tbcb6\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.092233 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-tbcb6\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.092229 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-tbcb6\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.092569 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-config\") pod \"dnsmasq-dns-5bf47b49b7-tbcb6\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.105462 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-tbcb6"]
Jan 29 16:28:51 crc kubenswrapper[4895]: E0129 16:28:51.106372 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-xslbp], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6" podUID="da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.111978 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xslbp\" (UniqueName: \"kubernetes.io/projected/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-kube-api-access-xslbp\") pod \"dnsmasq-dns-5bf47b49b7-tbcb6\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.146762 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-n64hc"]
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.148071 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.152117 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.164928 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-n64hc"]
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.194020 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b973acd5-f963-43d9-8797-dece98571fa9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.194172 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.194262 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b973acd5-f963-43d9-8797-dece98571fa9-config\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.194340 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b973acd5-f963-43d9-8797-dece98571fa9-ovs-rundir\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.194456 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/47b279e9-73f3-444b-bf95-d97f9cc546ae-kube-api-access-zdxrl\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.194540 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b973acd5-f963-43d9-8797-dece98571fa9-ovn-rundir\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.194637 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-dns-svc\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.194776 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.194815 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-config\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.194903 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkxhr\" (UniqueName: \"kubernetes.io/projected/b973acd5-f963-43d9-8797-dece98571fa9-kube-api-access-kkxhr\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.194931 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b973acd5-f963-43d9-8797-dece98571fa9-combined-ca-bundle\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.199272 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b973acd5-f963-43d9-8797-dece98571fa9-ovn-rundir\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.200046 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b973acd5-f963-43d9-8797-dece98571fa9-config\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.200102 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b973acd5-f963-43d9-8797-dece98571fa9-ovs-rundir\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.200548 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b973acd5-f963-43d9-8797-dece98571fa9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.204719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b973acd5-f963-43d9-8797-dece98571fa9-combined-ca-bundle\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.216633 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.217953 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.221763 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.221822 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-sxnmf"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.222046 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.222050 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.237140 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.240750 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkxhr\" (UniqueName: \"kubernetes.io/projected/b973acd5-f963-43d9-8797-dece98571fa9-kube-api-access-kkxhr\") pod \"ovn-controller-metrics-bfpw2\" (UID: \"b973acd5-f963-43d9-8797-dece98571fa9\") " pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.297905 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-bfpw2"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.298730 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.298996 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.299036 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-config\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.299107 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.299156 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.299191 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xznrz\" (UniqueName: \"kubernetes.io/projected/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-kube-api-access-xznrz\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.299247 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/47b279e9-73f3-444b-bf95-d97f9cc546ae-kube-api-access-zdxrl\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.299299 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.299323 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-config\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.299346 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.299387 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-dns-svc\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.299423 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-scripts\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.300661 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.301613 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-config\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.302395 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.303626 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-dns-svc\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.324437 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/47b279e9-73f3-444b-bf95-d97f9cc546ae-kube-api-access-zdxrl\") pod \"dnsmasq-dns-8554648995-n64hc\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.401749 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.401814 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-config\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.401849 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.401911 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-scripts\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.401982 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.402059 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.402105 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xznrz\" (UniqueName: \"kubernetes.io/projected/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-kube-api-access-xznrz\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.402384 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.403214 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-scripts\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.404331 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-config\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.413091 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.415828 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.428579 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.432127 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xznrz\" (UniqueName: \"kubernetes.io/projected/1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1-kube-api-access-xznrz\") pod \"ovn-northd-0\" (UID: \"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1\") " pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.477318 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-n64hc"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.590974 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.694207 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.715895 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6"
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.752909 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-n64hc"]
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.772329 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bfpw2"]
Jan 29 16:28:51 crc kubenswrapper[4895]: W0129 16:28:51.780303 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb973acd5_f963_43d9_8797_dece98571fa9.slice/crio-40cd834d7645b3abd058ced1adcd6c3fd39027f4128e5bd250dede7f2b16fef3 WatchSource:0}: Error finding container 40cd834d7645b3abd058ced1adcd6c3fd39027f4128e5bd250dede7f2b16fef3: Status 404 returned error can't find the container with id 40cd834d7645b3abd058ced1adcd6c3fd39027f4128e5bd250dede7f2b16fef3
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.814711 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xslbp\" (UniqueName: \"kubernetes.io/projected/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-kube-api-access-xslbp\") pod \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") "
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.815197 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-config\") pod \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") "
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.815354 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-ovsdbserver-nb\") pod \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") "
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.815575 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-dns-svc\") pod \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\" (UID: \"da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb\") "
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.817492 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-config" (OuterVolumeSpecName: "config") pod "da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb" (UID: "da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.818188 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb" (UID: "da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.819016 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb" (UID: "da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.827655 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-kube-api-access-xslbp" (OuterVolumeSpecName: "kube-api-access-xslbp") pod "da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb" (UID: "da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb"). InnerVolumeSpecName "kube-api-access-xslbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.918507 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xslbp\" (UniqueName: \"kubernetes.io/projected/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-kube-api-access-xslbp\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.918542 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.918554 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:51 crc kubenswrapper[4895]: I0129 16:28:51.918566 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:28:52 crc kubenswrapper[4895]: I0129 16:28:52.081719 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:28:52 crc kubenswrapper[4895]: E0129 16:28:52.209602 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47b279e9_73f3_444b_bf95_d97f9cc546ae.slice/crio-0a5d4da8fbbdd1f39d5fe27edfe64915003d001497146eb73f216861e8b9faa6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47b279e9_73f3_444b_bf95_d97f9cc546ae.slice/crio-conmon-0a5d4da8fbbdd1f39d5fe27edfe64915003d001497146eb73f216861e8b9faa6.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:28:52 crc kubenswrapper[4895]: I0129 16:28:52.564186 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 16:28:52 crc kubenswrapper[4895]: I0129 16:28:52.701854 4895 generic.go:334] "Generic (PLEG): container finished" podID="47b279e9-73f3-444b-bf95-d97f9cc546ae" containerID="0a5d4da8fbbdd1f39d5fe27edfe64915003d001497146eb73f216861e8b9faa6" exitCode=0 Jan 29 16:28:52 crc kubenswrapper[4895]: I0129 16:28:52.701909 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-n64hc" event={"ID":"47b279e9-73f3-444b-bf95-d97f9cc546ae","Type":"ContainerDied","Data":"0a5d4da8fbbdd1f39d5fe27edfe64915003d001497146eb73f216861e8b9faa6"} Jan 29 16:28:52 crc kubenswrapper[4895]: I0129 16:28:52.702039 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-n64hc" event={"ID":"47b279e9-73f3-444b-bf95-d97f9cc546ae","Type":"ContainerStarted","Data":"e7c0dd15645127743d1ae883e0e7720ae18bb5f6f778df9eeba204483cbd3d0f"} Jan 29 16:28:52 crc kubenswrapper[4895]: I0129 16:28:52.703389 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bfpw2" event={"ID":"b973acd5-f963-43d9-8797-dece98571fa9","Type":"ContainerStarted","Data":"7f8071b7612c3c92acd8d6e1753118628f21299f667514f6c7103164eaa79d9e"} Jan 29 16:28:52 crc kubenswrapper[4895]: I0129 16:28:52.703423 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-bfpw2" event={"ID":"b973acd5-f963-43d9-8797-dece98571fa9","Type":"ContainerStarted","Data":"40cd834d7645b3abd058ced1adcd6c3fd39027f4128e5bd250dede7f2b16fef3"} Jan 29 16:28:52 crc kubenswrapper[4895]: I0129 16:28:52.705134 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-tbcb6" Jan 29 16:28:52 crc kubenswrapper[4895]: I0129 16:28:52.705153 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1","Type":"ContainerStarted","Data":"26b6ba00904c6112e9ecec8466ecc763cc3e74550b56ede6387286a07c910e88"} Jan 29 16:28:52 crc kubenswrapper[4895]: I0129 16:28:52.774003 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-bfpw2" podStartSLOduration=2.773977301 podStartE2EDuration="2.773977301s" podCreationTimestamp="2026-01-29 16:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:28:52.773511279 +0000 UTC m=+1016.576488553" watchObservedRunningTime="2026-01-29 16:28:52.773977301 +0000 UTC m=+1016.576954565" Jan 29 16:28:52 crc kubenswrapper[4895]: I0129 16:28:52.829985 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-tbcb6"] Jan 29 16:28:52 crc kubenswrapper[4895]: I0129 16:28:52.830047 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-tbcb6"] Jan 29 16:28:53 crc kubenswrapper[4895]: I0129 16:28:53.054935 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb" path="/var/lib/kubelet/pods/da2023fb-fe1a-4c5f-b96f-ee032c4fc3cb/volumes" Jan 29 16:28:53 crc kubenswrapper[4895]: I0129 16:28:53.718095 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"41493083-077b-4518-a749-48a27e14b2a7","Type":"ContainerStarted","Data":"0ac8249e78b7f3f7e5b7c1d03b19e3a8f55e34a617cf298c835b63decd34cadb"} Jan 29 16:28:53 crc kubenswrapper[4895]: I0129 16:28:53.722171 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-n64hc" event={"ID":"47b279e9-73f3-444b-bf95-d97f9cc546ae","Type":"ContainerStarted","Data":"aa4f271dba09a2a5626c37ee9688bb9605e2bb4d346d2a7baeb8e10905204595"} Jan 29 16:28:53 crc kubenswrapper[4895]: I0129 16:28:53.782787 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-n64hc" podStartSLOduration=2.7827634850000003 podStartE2EDuration="2.782763485s" podCreationTimestamp="2026-01-29 16:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:28:53.777728919 +0000 UTC m=+1017.580706193" watchObservedRunningTime="2026-01-29 16:28:53.782763485 +0000 UTC m=+1017.585740749" Jan 29 16:28:54 crc kubenswrapper[4895]: I0129 16:28:54.731978 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1","Type":"ContainerStarted","Data":"c9f256c30207f1e3ef9758cc73678bbf8e486a939dfc6ba714856a3f6b5b854f"} Jan 29 16:28:54 crc kubenswrapper[4895]: I0129 16:28:54.732500 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1","Type":"ContainerStarted","Data":"c19f125f1f523d830ecdbddc5f259bc74ff4668eb9fcff7dd1fc9eba235adc81"} Jan 29 16:28:54 crc kubenswrapper[4895]: I0129 16:28:54.732522 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 16:28:54 crc kubenswrapper[4895]: I0129 16:28:54.732535 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-n64hc" Jan 29 16:28:54 crc 
kubenswrapper[4895]: I0129 16:28:54.758251 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.9230286890000001 podStartE2EDuration="3.758229187s" podCreationTimestamp="2026-01-29 16:28:51 +0000 UTC" firstStartedPulling="2026-01-29 16:28:52.098451694 +0000 UTC m=+1015.901428958" lastFinishedPulling="2026-01-29 16:28:53.933652192 +0000 UTC m=+1017.736629456" observedRunningTime="2026-01-29 16:28:54.751485844 +0000 UTC m=+1018.554463098" watchObservedRunningTime="2026-01-29 16:28:54.758229187 +0000 UTC m=+1018.561206451" Jan 29 16:28:55 crc kubenswrapper[4895]: I0129 16:28:55.955937 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 16:28:55 crc kubenswrapper[4895]: I0129 16:28:55.956348 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 16:28:57 crc kubenswrapper[4895]: I0129 16:28:57.823755 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:28:57 crc kubenswrapper[4895]: I0129 16:28:57.824326 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:28:59 crc kubenswrapper[4895]: I0129 16:28:59.128896 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 16:28:59 crc kubenswrapper[4895]: I0129 16:28:59.221411 4895 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/openstack-galera-0" podUID="ef5d7b98-96fe-49e9-ba5b-f662a93ce514" containerName="galera" probeResult="failure" output=< Jan 29 16:28:59 crc kubenswrapper[4895]: wsrep_local_state_comment (Joined) differs from Synced Jan 29 16:28:59 crc kubenswrapper[4895]: > Jan 29 16:28:59 crc kubenswrapper[4895]: I0129 16:28:59.602911 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 16:28:59 crc kubenswrapper[4895]: I0129 16:28:59.781114 4895 generic.go:334] "Generic (PLEG): container finished" podID="41493083-077b-4518-a749-48a27e14b2a7" containerID="0ac8249e78b7f3f7e5b7c1d03b19e3a8f55e34a617cf298c835b63decd34cadb" exitCode=0 Jan 29 16:28:59 crc kubenswrapper[4895]: I0129 16:28:59.781194 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"41493083-077b-4518-a749-48a27e14b2a7","Type":"ContainerDied","Data":"0ac8249e78b7f3f7e5b7c1d03b19e3a8f55e34a617cf298c835b63decd34cadb"} Jan 29 16:29:00 crc kubenswrapper[4895]: I0129 16:29:00.792717 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"41493083-077b-4518-a749-48a27e14b2a7","Type":"ContainerStarted","Data":"cf90e36df8dc938bd8bd5b5caef95852724476f94e9b054853d9d4b3fd3830a8"} Jan 29 16:29:00 crc kubenswrapper[4895]: I0129 16:29:00.818662 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371982.036146 podStartE2EDuration="54.818629551s" podCreationTimestamp="2026-01-29 16:28:06 +0000 UTC" firstStartedPulling="2026-01-29 16:28:20.852490576 +0000 UTC m=+984.655467840" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:00.81750834 +0000 UTC m=+1024.620485624" watchObservedRunningTime="2026-01-29 16:29:00.818629551 +0000 UTC m=+1024.621606815" Jan 29 16:29:01 crc kubenswrapper[4895]: I0129 16:29:01.479962 4895 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-n64hc" Jan 29 16:29:01 crc kubenswrapper[4895]: I0129 16:29:01.597936 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g2chz"] Jan 29 16:29:01 crc kubenswrapper[4895]: I0129 16:29:01.598361 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" podUID="1211a4fd-b5aa-41b9-8d9e-25437148b486" containerName="dnsmasq-dns" containerID="cri-o://16161f0ee5ab15262ec56eea45a2b8eb683cf442f83499cf963598c8e6509d39" gracePeriod=10 Jan 29 16:29:01 crc kubenswrapper[4895]: I0129 16:29:01.806442 4895 generic.go:334] "Generic (PLEG): container finished" podID="1211a4fd-b5aa-41b9-8d9e-25437148b486" containerID="16161f0ee5ab15262ec56eea45a2b8eb683cf442f83499cf963598c8e6509d39" exitCode=0 Jan 29 16:29:01 crc kubenswrapper[4895]: I0129 16:29:01.806520 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" event={"ID":"1211a4fd-b5aa-41b9-8d9e-25437148b486","Type":"ContainerDied","Data":"16161f0ee5ab15262ec56eea45a2b8eb683cf442f83499cf963598c8e6509d39"} Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.664193 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.759587 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qp2s\" (UniqueName: \"kubernetes.io/projected/1211a4fd-b5aa-41b9-8d9e-25437148b486-kube-api-access-6qp2s\") pod \"1211a4fd-b5aa-41b9-8d9e-25437148b486\" (UID: \"1211a4fd-b5aa-41b9-8d9e-25437148b486\") " Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.759765 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1211a4fd-b5aa-41b9-8d9e-25437148b486-dns-svc\") pod \"1211a4fd-b5aa-41b9-8d9e-25437148b486\" (UID: \"1211a4fd-b5aa-41b9-8d9e-25437148b486\") " Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.759891 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1211a4fd-b5aa-41b9-8d9e-25437148b486-config\") pod \"1211a4fd-b5aa-41b9-8d9e-25437148b486\" (UID: \"1211a4fd-b5aa-41b9-8d9e-25437148b486\") " Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.766375 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1211a4fd-b5aa-41b9-8d9e-25437148b486-kube-api-access-6qp2s" (OuterVolumeSpecName: "kube-api-access-6qp2s") pod "1211a4fd-b5aa-41b9-8d9e-25437148b486" (UID: "1211a4fd-b5aa-41b9-8d9e-25437148b486"). InnerVolumeSpecName "kube-api-access-6qp2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.805849 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1211a4fd-b5aa-41b9-8d9e-25437148b486-config" (OuterVolumeSpecName: "config") pod "1211a4fd-b5aa-41b9-8d9e-25437148b486" (UID: "1211a4fd-b5aa-41b9-8d9e-25437148b486"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.818164 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1211a4fd-b5aa-41b9-8d9e-25437148b486-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1211a4fd-b5aa-41b9-8d9e-25437148b486" (UID: "1211a4fd-b5aa-41b9-8d9e-25437148b486"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.822250 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" event={"ID":"1211a4fd-b5aa-41b9-8d9e-25437148b486","Type":"ContainerDied","Data":"3b74b32986c6ac99e116ea2e40b2d9300f7f85cb61b3277ba103dad5eb8d10a6"} Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.822339 4895 scope.go:117] "RemoveContainer" containerID="16161f0ee5ab15262ec56eea45a2b8eb683cf442f83499cf963598c8e6509d39" Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.822585 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-g2chz" Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.863077 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1211a4fd-b5aa-41b9-8d9e-25437148b486-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.863118 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1211a4fd-b5aa-41b9-8d9e-25437148b486-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.863132 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qp2s\" (UniqueName: \"kubernetes.io/projected/1211a4fd-b5aa-41b9-8d9e-25437148b486-kube-api-access-6qp2s\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.894946 4895 scope.go:117] "RemoveContainer" containerID="5e06a4da9f2e4ee9f85d16205907044360814158045b3a5ba709c5f854418636" Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.908709 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g2chz"] Jan 29 16:29:02 crc kubenswrapper[4895]: I0129 16:29:02.915788 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-g2chz"] Jan 29 16:29:03 crc kubenswrapper[4895]: I0129 16:29:03.049047 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1211a4fd-b5aa-41b9-8d9e-25437148b486" path="/var/lib/kubelet/pods/1211a4fd-b5aa-41b9-8d9e-25437148b486/volumes" Jan 29 16:29:06 crc kubenswrapper[4895]: I0129 16:29:06.038439 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.336675 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-g8ww6"] Jan 29 16:29:07 crc kubenswrapper[4895]: E0129 
16:29:07.337293 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1211a4fd-b5aa-41b9-8d9e-25437148b486" containerName="dnsmasq-dns" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.337314 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1211a4fd-b5aa-41b9-8d9e-25437148b486" containerName="dnsmasq-dns" Jan 29 16:29:07 crc kubenswrapper[4895]: E0129 16:29:07.337337 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1211a4fd-b5aa-41b9-8d9e-25437148b486" containerName="init" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.337345 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1211a4fd-b5aa-41b9-8d9e-25437148b486" containerName="init" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.337537 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1211a4fd-b5aa-41b9-8d9e-25437148b486" containerName="dnsmasq-dns" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.338410 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g8ww6" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.347793 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0483-account-create-update-tfx5k"] Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.353305 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0483-account-create-update-tfx5k" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.358407 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.361033 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g8ww6"] Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.378963 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0483-account-create-update-tfx5k"] Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.436055 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.436110 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.453618 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nnhm\" (UniqueName: \"kubernetes.io/projected/f35fe053-f782-4049-bb33-0dc45a1a07aa-kube-api-access-2nnhm\") pod \"keystone-0483-account-create-update-tfx5k\" (UID: \"f35fe053-f782-4049-bb33-0dc45a1a07aa\") " pod="openstack/keystone-0483-account-create-update-tfx5k" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.453733 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f35fe053-f782-4049-bb33-0dc45a1a07aa-operator-scripts\") pod \"keystone-0483-account-create-update-tfx5k\" (UID: \"f35fe053-f782-4049-bb33-0dc45a1a07aa\") " pod="openstack/keystone-0483-account-create-update-tfx5k" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.453768 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e-operator-scripts\") pod \"keystone-db-create-g8ww6\" (UID: \"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e\") " pod="openstack/keystone-db-create-g8ww6" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.453804 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fzh2\" (UniqueName: \"kubernetes.io/projected/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e-kube-api-access-9fzh2\") pod \"keystone-db-create-g8ww6\" (UID: \"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e\") " pod="openstack/keystone-db-create-g8ww6" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.533953 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zzlg4"] Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.535682 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zzlg4" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.549875 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zzlg4"] Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.556135 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nnhm\" (UniqueName: \"kubernetes.io/projected/f35fe053-f782-4049-bb33-0dc45a1a07aa-kube-api-access-2nnhm\") pod \"keystone-0483-account-create-update-tfx5k\" (UID: \"f35fe053-f782-4049-bb33-0dc45a1a07aa\") " pod="openstack/keystone-0483-account-create-update-tfx5k" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.556619 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f35fe053-f782-4049-bb33-0dc45a1a07aa-operator-scripts\") pod \"keystone-0483-account-create-update-tfx5k\" (UID: \"f35fe053-f782-4049-bb33-0dc45a1a07aa\") " 
pod="openstack/keystone-0483-account-create-update-tfx5k" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.558569 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e-operator-scripts\") pod \"keystone-db-create-g8ww6\" (UID: \"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e\") " pod="openstack/keystone-db-create-g8ww6" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.558642 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fzh2\" (UniqueName: \"kubernetes.io/projected/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e-kube-api-access-9fzh2\") pod \"keystone-db-create-g8ww6\" (UID: \"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e\") " pod="openstack/keystone-db-create-g8ww6" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.570857 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f35fe053-f782-4049-bb33-0dc45a1a07aa-operator-scripts\") pod \"keystone-0483-account-create-update-tfx5k\" (UID: \"f35fe053-f782-4049-bb33-0dc45a1a07aa\") " pod="openstack/keystone-0483-account-create-update-tfx5k" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.576380 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e-operator-scripts\") pod \"keystone-db-create-g8ww6\" (UID: \"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e\") " pod="openstack/keystone-db-create-g8ww6" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.621992 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fzh2\" (UniqueName: \"kubernetes.io/projected/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e-kube-api-access-9fzh2\") pod \"keystone-db-create-g8ww6\" (UID: \"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e\") " 
pod="openstack/keystone-db-create-g8ww6" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.624081 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nnhm\" (UniqueName: \"kubernetes.io/projected/f35fe053-f782-4049-bb33-0dc45a1a07aa-kube-api-access-2nnhm\") pod \"keystone-0483-account-create-update-tfx5k\" (UID: \"f35fe053-f782-4049-bb33-0dc45a1a07aa\") " pod="openstack/keystone-0483-account-create-update-tfx5k" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.660308 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-140c-account-create-update-5cftn"] Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.660449 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2198769e-7dd4-4dbb-8048-93e60289c898-operator-scripts\") pod \"placement-db-create-zzlg4\" (UID: \"2198769e-7dd4-4dbb-8048-93e60289c898\") " pod="openstack/placement-db-create-zzlg4" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.660560 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwsn5\" (UniqueName: \"kubernetes.io/projected/2198769e-7dd4-4dbb-8048-93e60289c898-kube-api-access-jwsn5\") pod \"placement-db-create-zzlg4\" (UID: \"2198769e-7dd4-4dbb-8048-93e60289c898\") " pod="openstack/placement-db-create-zzlg4" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.661947 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-140c-account-create-update-5cftn" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.666074 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.670814 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-g8ww6" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.675843 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-140c-account-create-update-5cftn"] Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.682155 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0483-account-create-update-tfx5k" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.764659 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwsn5\" (UniqueName: \"kubernetes.io/projected/2198769e-7dd4-4dbb-8048-93e60289c898-kube-api-access-jwsn5\") pod \"placement-db-create-zzlg4\" (UID: \"2198769e-7dd4-4dbb-8048-93e60289c898\") " pod="openstack/placement-db-create-zzlg4" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.764770 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39c1184b-9bdb-49aa-9cdb-934a29d9875c-operator-scripts\") pod \"placement-140c-account-create-update-5cftn\" (UID: \"39c1184b-9bdb-49aa-9cdb-934a29d9875c\") " pod="openstack/placement-140c-account-create-update-5cftn" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.764827 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cznmc\" (UniqueName: \"kubernetes.io/projected/39c1184b-9bdb-49aa-9cdb-934a29d9875c-kube-api-access-cznmc\") pod \"placement-140c-account-create-update-5cftn\" (UID: \"39c1184b-9bdb-49aa-9cdb-934a29d9875c\") " pod="openstack/placement-140c-account-create-update-5cftn" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.764858 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2198769e-7dd4-4dbb-8048-93e60289c898-operator-scripts\") pod 
\"placement-db-create-zzlg4\" (UID: \"2198769e-7dd4-4dbb-8048-93e60289c898\") " pod="openstack/placement-db-create-zzlg4" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.765971 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2198769e-7dd4-4dbb-8048-93e60289c898-operator-scripts\") pod \"placement-db-create-zzlg4\" (UID: \"2198769e-7dd4-4dbb-8048-93e60289c898\") " pod="openstack/placement-db-create-zzlg4" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.815714 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwsn5\" (UniqueName: \"kubernetes.io/projected/2198769e-7dd4-4dbb-8048-93e60289c898-kube-api-access-jwsn5\") pod \"placement-db-create-zzlg4\" (UID: \"2198769e-7dd4-4dbb-8048-93e60289c898\") " pod="openstack/placement-db-create-zzlg4" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.867718 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39c1184b-9bdb-49aa-9cdb-934a29d9875c-operator-scripts\") pod \"placement-140c-account-create-update-5cftn\" (UID: \"39c1184b-9bdb-49aa-9cdb-934a29d9875c\") " pod="openstack/placement-140c-account-create-update-5cftn" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.867810 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cznmc\" (UniqueName: \"kubernetes.io/projected/39c1184b-9bdb-49aa-9cdb-934a29d9875c-kube-api-access-cznmc\") pod \"placement-140c-account-create-update-5cftn\" (UID: \"39c1184b-9bdb-49aa-9cdb-934a29d9875c\") " pod="openstack/placement-140c-account-create-update-5cftn" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.872296 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39c1184b-9bdb-49aa-9cdb-934a29d9875c-operator-scripts\") pod 
\"placement-140c-account-create-update-5cftn\" (UID: \"39c1184b-9bdb-49aa-9cdb-934a29d9875c\") " pod="openstack/placement-140c-account-create-update-5cftn" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.897950 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zzlg4" Jan 29 16:29:07 crc kubenswrapper[4895]: I0129 16:29:07.899312 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cznmc\" (UniqueName: \"kubernetes.io/projected/39c1184b-9bdb-49aa-9cdb-934a29d9875c-kube-api-access-cznmc\") pod \"placement-140c-account-create-update-5cftn\" (UID: \"39c1184b-9bdb-49aa-9cdb-934a29d9875c\") " pod="openstack/placement-140c-account-create-update-5cftn" Jan 29 16:29:08 crc kubenswrapper[4895]: I0129 16:29:08.086080 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-140c-account-create-update-5cftn" Jan 29 16:29:08 crc kubenswrapper[4895]: I0129 16:29:08.168675 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0483-account-create-update-tfx5k"] Jan 29 16:29:08 crc kubenswrapper[4895]: W0129 16:29:08.177446 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf35fe053_f782_4049_bb33_0dc45a1a07aa.slice/crio-7001017848a941f1e9e1c4b6cfd8f349d16b3660270201a1d5b062ea0e2476a1 WatchSource:0}: Error finding container 7001017848a941f1e9e1c4b6cfd8f349d16b3660270201a1d5b062ea0e2476a1: Status 404 returned error can't find the container with id 7001017848a941f1e9e1c4b6cfd8f349d16b3660270201a1d5b062ea0e2476a1 Jan 29 16:29:08 crc kubenswrapper[4895]: I0129 16:29:08.276924 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g8ww6"] Jan 29 16:29:08 crc kubenswrapper[4895]: I0129 16:29:08.348911 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zzlg4"] 
Jan 29 16:29:08 crc kubenswrapper[4895]: I0129 16:29:08.649511 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-140c-account-create-update-5cftn"] Jan 29 16:29:08 crc kubenswrapper[4895]: W0129 16:29:08.658190 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39c1184b_9bdb_49aa_9cdb_934a29d9875c.slice/crio-44e61703611cac23c40613217d275523fa58f41e8058c500e375de0bee7d69a9 WatchSource:0}: Error finding container 44e61703611cac23c40613217d275523fa58f41e8058c500e375de0bee7d69a9: Status 404 returned error can't find the container with id 44e61703611cac23c40613217d275523fa58f41e8058c500e375de0bee7d69a9 Jan 29 16:29:08 crc kubenswrapper[4895]: I0129 16:29:08.930639 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zzlg4" event={"ID":"2198769e-7dd4-4dbb-8048-93e60289c898","Type":"ContainerStarted","Data":"f48192a204a2a4fa66fbab22479f8ab209f1fa40517aad01b08ec4a81087524f"} Jan 29 16:29:08 crc kubenswrapper[4895]: I0129 16:29:08.932840 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g8ww6" event={"ID":"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e","Type":"ContainerStarted","Data":"8bcc12deeb78b90e5ebf4d24306e72cc9e9ad1ff9529a65ef2a15ef4696bec16"} Jan 29 16:29:08 crc kubenswrapper[4895]: I0129 16:29:08.934274 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-140c-account-create-update-5cftn" event={"ID":"39c1184b-9bdb-49aa-9cdb-934a29d9875c","Type":"ContainerStarted","Data":"44e61703611cac23c40613217d275523fa58f41e8058c500e375de0bee7d69a9"} Jan 29 16:29:08 crc kubenswrapper[4895]: I0129 16:29:08.937133 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0483-account-create-update-tfx5k" event={"ID":"f35fe053-f782-4049-bb33-0dc45a1a07aa","Type":"ContainerStarted","Data":"7001017848a941f1e9e1c4b6cfd8f349d16b3660270201a1d5b062ea0e2476a1"} 
Jan 29 16:29:09 crc kubenswrapper[4895]: I0129 16:29:09.948442 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0483-account-create-update-tfx5k" event={"ID":"f35fe053-f782-4049-bb33-0dc45a1a07aa","Type":"ContainerStarted","Data":"02d80f8ae3938ea0b74c7adbf5023039a023eaa4e6b87b1b50117111b5ed61d9"} Jan 29 16:29:09 crc kubenswrapper[4895]: I0129 16:29:09.951312 4895 generic.go:334] "Generic (PLEG): container finished" podID="2198769e-7dd4-4dbb-8048-93e60289c898" containerID="a70b605869b9af1074089e250447357707e74034b3c91ebb37f1930f827bb3bb" exitCode=0 Jan 29 16:29:09 crc kubenswrapper[4895]: I0129 16:29:09.951354 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zzlg4" event={"ID":"2198769e-7dd4-4dbb-8048-93e60289c898","Type":"ContainerDied","Data":"a70b605869b9af1074089e250447357707e74034b3c91ebb37f1930f827bb3bb"} Jan 29 16:29:09 crc kubenswrapper[4895]: I0129 16:29:09.953029 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g8ww6" event={"ID":"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e","Type":"ContainerStarted","Data":"4ada251d049c70aba74db1a7c382aeb4212e0f56bcf02401ffd588fc110c5f64"} Jan 29 16:29:09 crc kubenswrapper[4895]: I0129 16:29:09.955938 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-140c-account-create-update-5cftn" event={"ID":"39c1184b-9bdb-49aa-9cdb-934a29d9875c","Type":"ContainerStarted","Data":"30e18be2da1743fb171f3dd20e7563947d2ce3b53278ba7fe5d84cae0464ff2c"} Jan 29 16:29:09 crc kubenswrapper[4895]: I0129 16:29:09.975211 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-0483-account-create-update-tfx5k" podStartSLOduration=2.975185088 podStartE2EDuration="2.975185088s" podCreationTimestamp="2026-01-29 16:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
16:29:09.966156164 +0000 UTC m=+1033.769133438" watchObservedRunningTime="2026-01-29 16:29:09.975185088 +0000 UTC m=+1033.778162372" Jan 29 16:29:09 crc kubenswrapper[4895]: I0129 16:29:09.992215 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 16:29:09 crc kubenswrapper[4895]: I0129 16:29:09.992600 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-g8ww6" podStartSLOduration=2.992566779 podStartE2EDuration="2.992566779s" podCreationTimestamp="2026-01-29 16:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:09.983622787 +0000 UTC m=+1033.786600081" watchObservedRunningTime="2026-01-29 16:29:09.992566779 +0000 UTC m=+1033.795544043" Jan 29 16:29:10 crc kubenswrapper[4895]: I0129 16:29:10.040439 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-140c-account-create-update-5cftn" podStartSLOduration=3.040417525 podStartE2EDuration="3.040417525s" podCreationTimestamp="2026-01-29 16:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:10.019603492 +0000 UTC m=+1033.822580766" watchObservedRunningTime="2026-01-29 16:29:10.040417525 +0000 UTC m=+1033.843394789" Jan 29 16:29:10 crc kubenswrapper[4895]: I0129 16:29:10.092541 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="41493083-077b-4518-a749-48a27e14b2a7" containerName="galera" probeResult="failure" output=< Jan 29 16:29:10 crc kubenswrapper[4895]: wsrep_local_state_comment (Joined) differs from Synced Jan 29 16:29:10 crc kubenswrapper[4895]: > Jan 29 16:29:10 crc kubenswrapper[4895]: I0129 16:29:10.969981 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e" containerID="4ada251d049c70aba74db1a7c382aeb4212e0f56bcf02401ffd588fc110c5f64" exitCode=0 Jan 29 16:29:10 crc kubenswrapper[4895]: I0129 16:29:10.970092 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g8ww6" event={"ID":"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e","Type":"ContainerDied","Data":"4ada251d049c70aba74db1a7c382aeb4212e0f56bcf02401ffd588fc110c5f64"} Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.437538 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zzlg4" Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.546668 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2198769e-7dd4-4dbb-8048-93e60289c898-operator-scripts\") pod \"2198769e-7dd4-4dbb-8048-93e60289c898\" (UID: \"2198769e-7dd4-4dbb-8048-93e60289c898\") " Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.546956 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwsn5\" (UniqueName: \"kubernetes.io/projected/2198769e-7dd4-4dbb-8048-93e60289c898-kube-api-access-jwsn5\") pod \"2198769e-7dd4-4dbb-8048-93e60289c898\" (UID: \"2198769e-7dd4-4dbb-8048-93e60289c898\") " Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.547830 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2198769e-7dd4-4dbb-8048-93e60289c898-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2198769e-7dd4-4dbb-8048-93e60289c898" (UID: "2198769e-7dd4-4dbb-8048-93e60289c898"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.555906 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2198769e-7dd4-4dbb-8048-93e60289c898-kube-api-access-jwsn5" (OuterVolumeSpecName: "kube-api-access-jwsn5") pod "2198769e-7dd4-4dbb-8048-93e60289c898" (UID: "2198769e-7dd4-4dbb-8048-93e60289c898"). InnerVolumeSpecName "kube-api-access-jwsn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.649525 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2198769e-7dd4-4dbb-8048-93e60289c898-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.649603 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwsn5\" (UniqueName: \"kubernetes.io/projected/2198769e-7dd4-4dbb-8048-93e60289c898-kube-api-access-jwsn5\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.678845 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.982340 4895 generic.go:334] "Generic (PLEG): container finished" podID="f35fe053-f782-4049-bb33-0dc45a1a07aa" containerID="02d80f8ae3938ea0b74c7adbf5023039a023eaa4e6b87b1b50117111b5ed61d9" exitCode=0 Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.982436 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0483-account-create-update-tfx5k" event={"ID":"f35fe053-f782-4049-bb33-0dc45a1a07aa","Type":"ContainerDied","Data":"02d80f8ae3938ea0b74c7adbf5023039a023eaa4e6b87b1b50117111b5ed61d9"} Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.985614 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zzlg4" Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.985657 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zzlg4" event={"ID":"2198769e-7dd4-4dbb-8048-93e60289c898","Type":"ContainerDied","Data":"f48192a204a2a4fa66fbab22479f8ab209f1fa40517aad01b08ec4a81087524f"} Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.985722 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f48192a204a2a4fa66fbab22479f8ab209f1fa40517aad01b08ec4a81087524f" Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.987905 4895 generic.go:334] "Generic (PLEG): container finished" podID="39c1184b-9bdb-49aa-9cdb-934a29d9875c" containerID="30e18be2da1743fb171f3dd20e7563947d2ce3b53278ba7fe5d84cae0464ff2c" exitCode=0 Jan 29 16:29:11 crc kubenswrapper[4895]: I0129 16:29:11.987954 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-140c-account-create-update-5cftn" event={"ID":"39c1184b-9bdb-49aa-9cdb-934a29d9875c","Type":"ContainerDied","Data":"30e18be2da1743fb171f3dd20e7563947d2ce3b53278ba7fe5d84cae0464ff2c"} Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.495985 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-g8ww6" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.570665 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fzh2\" (UniqueName: \"kubernetes.io/projected/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e-kube-api-access-9fzh2\") pod \"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e\" (UID: \"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e\") " Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.570917 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e-operator-scripts\") pod \"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e\" (UID: \"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e\") " Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.572606 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e" (UID: "1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.589324 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e-kube-api-access-9fzh2" (OuterVolumeSpecName: "kube-api-access-9fzh2") pod "1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e" (UID: "1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e"). InnerVolumeSpecName "kube-api-access-9fzh2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.673167 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fzh2\" (UniqueName: \"kubernetes.io/projected/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e-kube-api-access-9fzh2\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.673234 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.761056 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-wn525"] Jan 29 16:29:12 crc kubenswrapper[4895]: E0129 16:29:12.761523 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e" containerName="mariadb-database-create" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.761552 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e" containerName="mariadb-database-create" Jan 29 16:29:12 crc kubenswrapper[4895]: E0129 16:29:12.761588 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2198769e-7dd4-4dbb-8048-93e60289c898" containerName="mariadb-database-create" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.761598 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2198769e-7dd4-4dbb-8048-93e60289c898" containerName="mariadb-database-create" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.761801 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2198769e-7dd4-4dbb-8048-93e60289c898" containerName="mariadb-database-create" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.761828 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e" 
containerName="mariadb-database-create" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.762544 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wn525" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.770564 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wn525"] Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.860822 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0a76-account-create-update-tjmgh"] Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.862353 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0a76-account-create-update-tjmgh" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.864470 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.871374 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0a76-account-create-update-tjmgh"] Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.877144 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwh7k\" (UniqueName: \"kubernetes.io/projected/3d710b22-52da-4e44-8483-00d522bdf44e-kube-api-access-fwh7k\") pod \"glance-db-create-wn525\" (UID: \"3d710b22-52da-4e44-8483-00d522bdf44e\") " pod="openstack/glance-db-create-wn525" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.877381 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d710b22-52da-4e44-8483-00d522bdf44e-operator-scripts\") pod \"glance-db-create-wn525\" (UID: \"3d710b22-52da-4e44-8483-00d522bdf44e\") " pod="openstack/glance-db-create-wn525" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.979628 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr7tp\" (UniqueName: \"kubernetes.io/projected/19c50f54-9d03-4873-85e4-1958e9f81a90-kube-api-access-rr7tp\") pod \"glance-0a76-account-create-update-tjmgh\" (UID: \"19c50f54-9d03-4873-85e4-1958e9f81a90\") " pod="openstack/glance-0a76-account-create-update-tjmgh" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.979722 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d710b22-52da-4e44-8483-00d522bdf44e-operator-scripts\") pod \"glance-db-create-wn525\" (UID: \"3d710b22-52da-4e44-8483-00d522bdf44e\") " pod="openstack/glance-db-create-wn525" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.979848 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwh7k\" (UniqueName: \"kubernetes.io/projected/3d710b22-52da-4e44-8483-00d522bdf44e-kube-api-access-fwh7k\") pod \"glance-db-create-wn525\" (UID: \"3d710b22-52da-4e44-8483-00d522bdf44e\") " pod="openstack/glance-db-create-wn525" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.979933 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19c50f54-9d03-4873-85e4-1958e9f81a90-operator-scripts\") pod \"glance-0a76-account-create-update-tjmgh\" (UID: \"19c50f54-9d03-4873-85e4-1958e9f81a90\") " pod="openstack/glance-0a76-account-create-update-tjmgh" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.980849 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d710b22-52da-4e44-8483-00d522bdf44e-operator-scripts\") pod \"glance-db-create-wn525\" (UID: \"3d710b22-52da-4e44-8483-00d522bdf44e\") " pod="openstack/glance-db-create-wn525" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.998097 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwh7k\" (UniqueName: \"kubernetes.io/projected/3d710b22-52da-4e44-8483-00d522bdf44e-kube-api-access-fwh7k\") pod \"glance-db-create-wn525\" (UID: \"3d710b22-52da-4e44-8483-00d522bdf44e\") " pod="openstack/glance-db-create-wn525" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.998302 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g8ww6" Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.998296 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g8ww6" event={"ID":"1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e","Type":"ContainerDied","Data":"8bcc12deeb78b90e5ebf4d24306e72cc9e9ad1ff9529a65ef2a15ef4696bec16"} Jan 29 16:29:12 crc kubenswrapper[4895]: I0129 16:29:12.998372 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bcc12deeb78b90e5ebf4d24306e72cc9e9ad1ff9529a65ef2a15ef4696bec16" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.080914 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-wn525" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.081742 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19c50f54-9d03-4873-85e4-1958e9f81a90-operator-scripts\") pod \"glance-0a76-account-create-update-tjmgh\" (UID: \"19c50f54-9d03-4873-85e4-1958e9f81a90\") " pod="openstack/glance-0a76-account-create-update-tjmgh" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.081940 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr7tp\" (UniqueName: \"kubernetes.io/projected/19c50f54-9d03-4873-85e4-1958e9f81a90-kube-api-access-rr7tp\") pod \"glance-0a76-account-create-update-tjmgh\" (UID: \"19c50f54-9d03-4873-85e4-1958e9f81a90\") " pod="openstack/glance-0a76-account-create-update-tjmgh" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.083393 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19c50f54-9d03-4873-85e4-1958e9f81a90-operator-scripts\") pod \"glance-0a76-account-create-update-tjmgh\" (UID: \"19c50f54-9d03-4873-85e4-1958e9f81a90\") " pod="openstack/glance-0a76-account-create-update-tjmgh" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.102266 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr7tp\" (UniqueName: \"kubernetes.io/projected/19c50f54-9d03-4873-85e4-1958e9f81a90-kube-api-access-rr7tp\") pod \"glance-0a76-account-create-update-tjmgh\" (UID: \"19c50f54-9d03-4873-85e4-1958e9f81a90\") " pod="openstack/glance-0a76-account-create-update-tjmgh" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.223004 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0a76-account-create-update-tjmgh" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.381224 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0483-account-create-update-tfx5k" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.488778 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nnhm\" (UniqueName: \"kubernetes.io/projected/f35fe053-f782-4049-bb33-0dc45a1a07aa-kube-api-access-2nnhm\") pod \"f35fe053-f782-4049-bb33-0dc45a1a07aa\" (UID: \"f35fe053-f782-4049-bb33-0dc45a1a07aa\") " Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.488875 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f35fe053-f782-4049-bb33-0dc45a1a07aa-operator-scripts\") pod \"f35fe053-f782-4049-bb33-0dc45a1a07aa\" (UID: \"f35fe053-f782-4049-bb33-0dc45a1a07aa\") " Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.490642 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f35fe053-f782-4049-bb33-0dc45a1a07aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f35fe053-f782-4049-bb33-0dc45a1a07aa" (UID: "f35fe053-f782-4049-bb33-0dc45a1a07aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.495036 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-140c-account-create-update-5cftn" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.498081 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35fe053-f782-4049-bb33-0dc45a1a07aa-kube-api-access-2nnhm" (OuterVolumeSpecName: "kube-api-access-2nnhm") pod "f35fe053-f782-4049-bb33-0dc45a1a07aa" (UID: "f35fe053-f782-4049-bb33-0dc45a1a07aa"). InnerVolumeSpecName "kube-api-access-2nnhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.592287 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cznmc\" (UniqueName: \"kubernetes.io/projected/39c1184b-9bdb-49aa-9cdb-934a29d9875c-kube-api-access-cznmc\") pod \"39c1184b-9bdb-49aa-9cdb-934a29d9875c\" (UID: \"39c1184b-9bdb-49aa-9cdb-934a29d9875c\") " Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.592524 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39c1184b-9bdb-49aa-9cdb-934a29d9875c-operator-scripts\") pod \"39c1184b-9bdb-49aa-9cdb-934a29d9875c\" (UID: \"39c1184b-9bdb-49aa-9cdb-934a29d9875c\") " Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.592928 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nnhm\" (UniqueName: \"kubernetes.io/projected/f35fe053-f782-4049-bb33-0dc45a1a07aa-kube-api-access-2nnhm\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.592950 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f35fe053-f782-4049-bb33-0dc45a1a07aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.593765 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/39c1184b-9bdb-49aa-9cdb-934a29d9875c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39c1184b-9bdb-49aa-9cdb-934a29d9875c" (UID: "39c1184b-9bdb-49aa-9cdb-934a29d9875c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.596624 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c1184b-9bdb-49aa-9cdb-934a29d9875c-kube-api-access-cznmc" (OuterVolumeSpecName: "kube-api-access-cznmc") pod "39c1184b-9bdb-49aa-9cdb-934a29d9875c" (UID: "39c1184b-9bdb-49aa-9cdb-934a29d9875c"). InnerVolumeSpecName "kube-api-access-cznmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.597142 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0a76-account-create-update-tjmgh"] Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.657712 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wn525"] Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.694983 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39c1184b-9bdb-49aa-9cdb-934a29d9875c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:13 crc kubenswrapper[4895]: I0129 16:29:13.695038 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cznmc\" (UniqueName: \"kubernetes.io/projected/39c1184b-9bdb-49aa-9cdb-934a29d9875c-kube-api-access-cznmc\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.008185 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0a76-account-create-update-tjmgh" event={"ID":"19c50f54-9d03-4873-85e4-1958e9f81a90","Type":"ContainerStarted","Data":"a90b15b4115d3440c279f9d2a6987b17627b1dbb787c25e62a2b57c7a781527e"} Jan 29 16:29:14 
crc kubenswrapper[4895]: I0129 16:29:14.009861 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-140c-account-create-update-5cftn" event={"ID":"39c1184b-9bdb-49aa-9cdb-934a29d9875c","Type":"ContainerDied","Data":"44e61703611cac23c40613217d275523fa58f41e8058c500e375de0bee7d69a9"} Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.009936 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44e61703611cac23c40613217d275523fa58f41e8058c500e375de0bee7d69a9" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.010012 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-140c-account-create-update-5cftn" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.011912 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wn525" event={"ID":"3d710b22-52da-4e44-8483-00d522bdf44e","Type":"ContainerStarted","Data":"7be784a80960ecca33d96f0d84159867ead3d1b2a928bc301c19bd6edb0cea7c"} Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.013795 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0483-account-create-update-tfx5k" event={"ID":"f35fe053-f782-4049-bb33-0dc45a1a07aa","Type":"ContainerDied","Data":"7001017848a941f1e9e1c4b6cfd8f349d16b3660270201a1d5b062ea0e2476a1"} Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.013837 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7001017848a941f1e9e1c4b6cfd8f349d16b3660270201a1d5b062ea0e2476a1" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.013929 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0483-account-create-update-tfx5k" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.591538 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7n9qc"] Jan 29 16:29:14 crc kubenswrapper[4895]: E0129 16:29:14.592083 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c1184b-9bdb-49aa-9cdb-934a29d9875c" containerName="mariadb-account-create-update" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.592108 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c1184b-9bdb-49aa-9cdb-934a29d9875c" containerName="mariadb-account-create-update" Jan 29 16:29:14 crc kubenswrapper[4895]: E0129 16:29:14.592141 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35fe053-f782-4049-bb33-0dc45a1a07aa" containerName="mariadb-account-create-update" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.592149 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35fe053-f782-4049-bb33-0dc45a1a07aa" containerName="mariadb-account-create-update" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.592378 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35fe053-f782-4049-bb33-0dc45a1a07aa" containerName="mariadb-account-create-update" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.592404 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c1184b-9bdb-49aa-9cdb-934a29d9875c" containerName="mariadb-account-create-update" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.593218 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7n9qc" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.596177 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.609957 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7n9qc"] Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.717158 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czfkm\" (UniqueName: \"kubernetes.io/projected/154c4350-d415-473d-b33d-f7da4c78b7a6-kube-api-access-czfkm\") pod \"root-account-create-update-7n9qc\" (UID: \"154c4350-d415-473d-b33d-f7da4c78b7a6\") " pod="openstack/root-account-create-update-7n9qc" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.717941 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/154c4350-d415-473d-b33d-f7da4c78b7a6-operator-scripts\") pod \"root-account-create-update-7n9qc\" (UID: \"154c4350-d415-473d-b33d-f7da4c78b7a6\") " pod="openstack/root-account-create-update-7n9qc" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.819938 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/154c4350-d415-473d-b33d-f7da4c78b7a6-operator-scripts\") pod \"root-account-create-update-7n9qc\" (UID: \"154c4350-d415-473d-b33d-f7da4c78b7a6\") " pod="openstack/root-account-create-update-7n9qc" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.820079 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkm\" (UniqueName: \"kubernetes.io/projected/154c4350-d415-473d-b33d-f7da4c78b7a6-kube-api-access-czfkm\") pod \"root-account-create-update-7n9qc\" (UID: 
\"154c4350-d415-473d-b33d-f7da4c78b7a6\") " pod="openstack/root-account-create-update-7n9qc" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.820976 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/154c4350-d415-473d-b33d-f7da4c78b7a6-operator-scripts\") pod \"root-account-create-update-7n9qc\" (UID: \"154c4350-d415-473d-b33d-f7da4c78b7a6\") " pod="openstack/root-account-create-update-7n9qc" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.850091 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czfkm\" (UniqueName: \"kubernetes.io/projected/154c4350-d415-473d-b33d-f7da4c78b7a6-kube-api-access-czfkm\") pod \"root-account-create-update-7n9qc\" (UID: \"154c4350-d415-473d-b33d-f7da4c78b7a6\") " pod="openstack/root-account-create-update-7n9qc" Jan 29 16:29:14 crc kubenswrapper[4895]: I0129 16:29:14.925465 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7n9qc" Jan 29 16:29:15 crc kubenswrapper[4895]: I0129 16:29:15.026725 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wn525" event={"ID":"3d710b22-52da-4e44-8483-00d522bdf44e","Type":"ContainerStarted","Data":"0cf4de0b939eb3da07e6f227feda677476d1a85387acde0c8d0de5d9308e3dee"} Jan 29 16:29:15 crc kubenswrapper[4895]: I0129 16:29:15.031311 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0a76-account-create-update-tjmgh" event={"ID":"19c50f54-9d03-4873-85e4-1958e9f81a90","Type":"ContainerStarted","Data":"801c43d969da3d1ac44900d3047685e2e30ec75a3944a0f30bac8d0cdcdc9869"} Jan 29 16:29:15 crc kubenswrapper[4895]: I0129 16:29:15.057206 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-wn525" podStartSLOduration=3.057175266 podStartE2EDuration="3.057175266s" podCreationTimestamp="2026-01-29 16:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:15.044314638 +0000 UTC m=+1038.847291922" watchObservedRunningTime="2026-01-29 16:29:15.057175266 +0000 UTC m=+1038.860152530" Jan 29 16:29:15 crc kubenswrapper[4895]: I0129 16:29:15.071789 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-0a76-account-create-update-tjmgh" podStartSLOduration=3.071756521 podStartE2EDuration="3.071756521s" podCreationTimestamp="2026-01-29 16:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:15.062385767 +0000 UTC m=+1038.865363041" watchObservedRunningTime="2026-01-29 16:29:15.071756521 +0000 UTC m=+1038.874733795" Jan 29 16:29:15 crc kubenswrapper[4895]: I0129 16:29:15.491740 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/root-account-create-update-7n9qc"] Jan 29 16:29:16 crc kubenswrapper[4895]: I0129 16:29:16.040721 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7n9qc" event={"ID":"154c4350-d415-473d-b33d-f7da4c78b7a6","Type":"ContainerStarted","Data":"6f17eb14814b641336430b9c823010992532c8d8ec2a82555d0e7ba8aafc5d1d"} Jan 29 16:29:16 crc kubenswrapper[4895]: I0129 16:29:16.040789 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7n9qc" event={"ID":"154c4350-d415-473d-b33d-f7da4c78b7a6","Type":"ContainerStarted","Data":"2f12053614b42b956b63256412ad9dd130c1cb7d8afb8107f03b93861f82416a"} Jan 29 16:29:16 crc kubenswrapper[4895]: I0129 16:29:16.059442 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-7n9qc" podStartSLOduration=2.059406349 podStartE2EDuration="2.059406349s" podCreationTimestamp="2026-01-29 16:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:16.058337351 +0000 UTC m=+1039.861314615" watchObservedRunningTime="2026-01-29 16:29:16.059406349 +0000 UTC m=+1039.862383623" Jan 29 16:29:17 crc kubenswrapper[4895]: I0129 16:29:17.083345 4895 generic.go:334] "Generic (PLEG): container finished" podID="3d710b22-52da-4e44-8483-00d522bdf44e" containerID="0cf4de0b939eb3da07e6f227feda677476d1a85387acde0c8d0de5d9308e3dee" exitCode=0 Jan 29 16:29:17 crc kubenswrapper[4895]: I0129 16:29:17.083448 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wn525" event={"ID":"3d710b22-52da-4e44-8483-00d522bdf44e","Type":"ContainerDied","Data":"0cf4de0b939eb3da07e6f227feda677476d1a85387acde0c8d0de5d9308e3dee"} Jan 29 16:29:17 crc kubenswrapper[4895]: I0129 16:29:17.086122 4895 generic.go:334] "Generic (PLEG): container finished" podID="154c4350-d415-473d-b33d-f7da4c78b7a6" 
containerID="6f17eb14814b641336430b9c823010992532c8d8ec2a82555d0e7ba8aafc5d1d" exitCode=0 Jan 29 16:29:17 crc kubenswrapper[4895]: I0129 16:29:17.086357 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7n9qc" event={"ID":"154c4350-d415-473d-b33d-f7da4c78b7a6","Type":"ContainerDied","Data":"6f17eb14814b641336430b9c823010992532c8d8ec2a82555d0e7ba8aafc5d1d"} Jan 29 16:29:17 crc kubenswrapper[4895]: I0129 16:29:17.092981 4895 generic.go:334] "Generic (PLEG): container finished" podID="19c50f54-9d03-4873-85e4-1958e9f81a90" containerID="801c43d969da3d1ac44900d3047685e2e30ec75a3944a0f30bac8d0cdcdc9869" exitCode=0 Jan 29 16:29:17 crc kubenswrapper[4895]: I0129 16:29:17.093043 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0a76-account-create-update-tjmgh" event={"ID":"19c50f54-9d03-4873-85e4-1958e9f81a90","Type":"ContainerDied","Data":"801c43d969da3d1ac44900d3047685e2e30ec75a3944a0f30bac8d0cdcdc9869"} Jan 29 16:29:17 crc kubenswrapper[4895]: I0129 16:29:17.558088 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.106653 4895 generic.go:334] "Generic (PLEG): container finished" podID="f23fdbdb-0285-4d43-b9bd-923b372eaf42" containerID="703927b788d49dd2fbc7dcbeded873e6df74abef9151860b8f1949f49dd98c6a" exitCode=0 Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.106758 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f23fdbdb-0285-4d43-b9bd-923b372eaf42","Type":"ContainerDied","Data":"703927b788d49dd2fbc7dcbeded873e6df74abef9151860b8f1949f49dd98c6a"} Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.108996 4895 generic.go:334] "Generic (PLEG): container finished" podID="c3729063-b6e8-4de8-9ab9-7448a3ec325a" containerID="dedd8cbb1a70735f89c5ebe0a75831ee00fddc22aea7f9aced27f2b11b89c2a8" exitCode=0 Jan 29 16:29:18 crc 
kubenswrapper[4895]: I0129 16:29:18.109283 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3729063-b6e8-4de8-9ab9-7448a3ec325a","Type":"ContainerDied","Data":"dedd8cbb1a70735f89c5ebe0a75831ee00fddc22aea7f9aced27f2b11b89c2a8"} Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.335281 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7h6nr" podUID="37ab7a53-0bcb-4f36-baa2-8d125d379bd3" containerName="ovn-controller" probeResult="failure" output=< Jan 29 16:29:18 crc kubenswrapper[4895]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 16:29:18 crc kubenswrapper[4895]: > Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.383006 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.407291 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9d8jr" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.493252 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7n9qc" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.598246 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czfkm\" (UniqueName: \"kubernetes.io/projected/154c4350-d415-473d-b33d-f7da4c78b7a6-kube-api-access-czfkm\") pod \"154c4350-d415-473d-b33d-f7da4c78b7a6\" (UID: \"154c4350-d415-473d-b33d-f7da4c78b7a6\") " Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.598428 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/154c4350-d415-473d-b33d-f7da4c78b7a6-operator-scripts\") pod \"154c4350-d415-473d-b33d-f7da4c78b7a6\" (UID: \"154c4350-d415-473d-b33d-f7da4c78b7a6\") " Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.599712 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/154c4350-d415-473d-b33d-f7da4c78b7a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "154c4350-d415-473d-b33d-f7da4c78b7a6" (UID: "154c4350-d415-473d-b33d-f7da4c78b7a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.605683 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/154c4350-d415-473d-b33d-f7da4c78b7a6-kube-api-access-czfkm" (OuterVolumeSpecName: "kube-api-access-czfkm") pod "154c4350-d415-473d-b33d-f7da4c78b7a6" (UID: "154c4350-d415-473d-b33d-f7da4c78b7a6"). InnerVolumeSpecName "kube-api-access-czfkm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.700374 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czfkm\" (UniqueName: \"kubernetes.io/projected/154c4350-d415-473d-b33d-f7da4c78b7a6-kube-api-access-czfkm\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.700424 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/154c4350-d415-473d-b33d-f7da4c78b7a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.724677 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7h6nr-config-zvh48"] Jan 29 16:29:18 crc kubenswrapper[4895]: E0129 16:29:18.725068 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="154c4350-d415-473d-b33d-f7da4c78b7a6" containerName="mariadb-account-create-update" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.725085 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="154c4350-d415-473d-b33d-f7da4c78b7a6" containerName="mariadb-account-create-update" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.725800 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="154c4350-d415-473d-b33d-f7da4c78b7a6" containerName="mariadb-account-create-update" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.730291 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.733180 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0a76-account-create-update-tjmgh" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.733683 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.746297 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wn525" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.765022 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7h6nr-config-zvh48"] Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.802163 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d710b22-52da-4e44-8483-00d522bdf44e-operator-scripts\") pod \"3d710b22-52da-4e44-8483-00d522bdf44e\" (UID: \"3d710b22-52da-4e44-8483-00d522bdf44e\") " Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.802307 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr7tp\" (UniqueName: \"kubernetes.io/projected/19c50f54-9d03-4873-85e4-1958e9f81a90-kube-api-access-rr7tp\") pod \"19c50f54-9d03-4873-85e4-1958e9f81a90\" (UID: \"19c50f54-9d03-4873-85e4-1958e9f81a90\") " Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.802374 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwh7k\" (UniqueName: \"kubernetes.io/projected/3d710b22-52da-4e44-8483-00d522bdf44e-kube-api-access-fwh7k\") pod \"3d710b22-52da-4e44-8483-00d522bdf44e\" (UID: \"3d710b22-52da-4e44-8483-00d522bdf44e\") " Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.802400 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19c50f54-9d03-4873-85e4-1958e9f81a90-operator-scripts\") pod 
\"19c50f54-9d03-4873-85e4-1958e9f81a90\" (UID: \"19c50f54-9d03-4873-85e4-1958e9f81a90\") " Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.802661 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-run\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.802684 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-log-ovn\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.802746 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-scripts\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.802797 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-additional-scripts\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.802832 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2psf6\" (UniqueName: 
\"kubernetes.io/projected/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-kube-api-access-2psf6\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.802908 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-run-ovn\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.803600 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d710b22-52da-4e44-8483-00d522bdf44e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d710b22-52da-4e44-8483-00d522bdf44e" (UID: "3d710b22-52da-4e44-8483-00d522bdf44e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.804068 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19c50f54-9d03-4873-85e4-1958e9f81a90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19c50f54-9d03-4873-85e4-1958e9f81a90" (UID: "19c50f54-9d03-4873-85e4-1958e9f81a90"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.807162 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d710b22-52da-4e44-8483-00d522bdf44e-kube-api-access-fwh7k" (OuterVolumeSpecName: "kube-api-access-fwh7k") pod "3d710b22-52da-4e44-8483-00d522bdf44e" (UID: "3d710b22-52da-4e44-8483-00d522bdf44e"). InnerVolumeSpecName "kube-api-access-fwh7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.809186 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c50f54-9d03-4873-85e4-1958e9f81a90-kube-api-access-rr7tp" (OuterVolumeSpecName: "kube-api-access-rr7tp") pod "19c50f54-9d03-4873-85e4-1958e9f81a90" (UID: "19c50f54-9d03-4873-85e4-1958e9f81a90"). InnerVolumeSpecName "kube-api-access-rr7tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.904475 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-run-ovn\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.904573 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-run\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.904592 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-log-ovn\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.904649 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-scripts\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: 
\"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.904687 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-additional-scripts\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.904719 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2psf6\" (UniqueName: \"kubernetes.io/projected/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-kube-api-access-2psf6\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.904776 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d710b22-52da-4e44-8483-00d522bdf44e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.904790 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr7tp\" (UniqueName: \"kubernetes.io/projected/19c50f54-9d03-4873-85e4-1958e9f81a90-kube-api-access-rr7tp\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.904804 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwh7k\" (UniqueName: \"kubernetes.io/projected/3d710b22-52da-4e44-8483-00d522bdf44e-kube-api-access-fwh7k\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.904813 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19c50f54-9d03-4873-85e4-1958e9f81a90-operator-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.904981 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-run-ovn\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.904998 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-run\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.905071 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-log-ovn\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.905798 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-additional-scripts\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 16:29:18.907405 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-scripts\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:18 crc kubenswrapper[4895]: I0129 
16:29:18.922153 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2psf6\" (UniqueName: \"kubernetes.io/projected/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-kube-api-access-2psf6\") pod \"ovn-controller-7h6nr-config-zvh48\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.053609 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.188582 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7n9qc" event={"ID":"154c4350-d415-473d-b33d-f7da4c78b7a6","Type":"ContainerDied","Data":"2f12053614b42b956b63256412ad9dd130c1cb7d8afb8107f03b93861f82416a"} Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.188651 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f12053614b42b956b63256412ad9dd130c1cb7d8afb8107f03b93861f82416a" Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.188804 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7n9qc" Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.237565 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0a76-account-create-update-tjmgh" Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.237919 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0a76-account-create-update-tjmgh" event={"ID":"19c50f54-9d03-4873-85e4-1958e9f81a90","Type":"ContainerDied","Data":"a90b15b4115d3440c279f9d2a6987b17627b1dbb787c25e62a2b57c7a781527e"} Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.237981 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a90b15b4115d3440c279f9d2a6987b17627b1dbb787c25e62a2b57c7a781527e" Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.265374 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f23fdbdb-0285-4d43-b9bd-923b372eaf42","Type":"ContainerStarted","Data":"5fbbb5604144066b13f3f294e6d850ee393d3449bc20fb862307a8db580c2194"} Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.266354 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.274697 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3729063-b6e8-4de8-9ab9-7448a3ec325a","Type":"ContainerStarted","Data":"0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82"} Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.275494 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.292276 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-wn525" Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.293018 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wn525" event={"ID":"3d710b22-52da-4e44-8483-00d522bdf44e","Type":"ContainerDied","Data":"7be784a80960ecca33d96f0d84159867ead3d1b2a928bc301c19bd6edb0cea7c"} Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.293045 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7be784a80960ecca33d96f0d84159867ead3d1b2a928bc301c19bd6edb0cea7c" Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.341329 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.436094323 podStartE2EDuration="1m17.341303171s" podCreationTimestamp="2026-01-29 16:28:02 +0000 UTC" firstStartedPulling="2026-01-29 16:28:08.501477708 +0000 UTC m=+972.304454972" lastFinishedPulling="2026-01-29 16:28:43.406686556 +0000 UTC m=+1007.209663820" observedRunningTime="2026-01-29 16:29:19.334842655 +0000 UTC m=+1043.137819939" watchObservedRunningTime="2026-01-29 16:29:19.341303171 +0000 UTC m=+1043.144280435" Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.445607 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=60.331818116 podStartE2EDuration="1m16.445583558s" podCreationTimestamp="2026-01-29 16:28:03 +0000 UTC" firstStartedPulling="2026-01-29 16:28:26.768073578 +0000 UTC m=+990.571050882" lastFinishedPulling="2026-01-29 16:28:42.88183906 +0000 UTC m=+1006.684816324" observedRunningTime="2026-01-29 16:29:19.433180551 +0000 UTC m=+1043.236157825" watchObservedRunningTime="2026-01-29 16:29:19.445583558 +0000 UTC m=+1043.248560822" Jan 29 16:29:19 crc kubenswrapper[4895]: I0129 16:29:19.716333 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7h6nr-config-zvh48"] 
Jan 29 16:29:20 crc kubenswrapper[4895]: I0129 16:29:20.301100 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7h6nr-config-zvh48" event={"ID":"2144f08d-8b5a-4d2a-827c-bf232cba2f3f","Type":"ContainerStarted","Data":"d2618ef23038f748a51d24ecfe055b59106c12e28e227596edbd3cd26bf36780"} Jan 29 16:29:21 crc kubenswrapper[4895]: I0129 16:29:21.311003 4895 generic.go:334] "Generic (PLEG): container finished" podID="2144f08d-8b5a-4d2a-827c-bf232cba2f3f" containerID="c78dfff09608dde8797049a8ecea1a112c629c4a8d9e43970b4e3654e7d807ca" exitCode=0 Jan 29 16:29:21 crc kubenswrapper[4895]: I0129 16:29:21.311123 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7h6nr-config-zvh48" event={"ID":"2144f08d-8b5a-4d2a-827c-bf232cba2f3f","Type":"ContainerDied","Data":"c78dfff09608dde8797049a8ecea1a112c629c4a8d9e43970b4e3654e7d807ca"} Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.659984 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.790858 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-log-ovn\") pod \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.790957 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-run\") pod \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.791030 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-scripts\") pod \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.791059 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-run-ovn\") pod \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.791069 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2144f08d-8b5a-4d2a-827c-bf232cba2f3f" (UID: "2144f08d-8b5a-4d2a-827c-bf232cba2f3f"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.791087 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-run" (OuterVolumeSpecName: "var-run") pod "2144f08d-8b5a-4d2a-827c-bf232cba2f3f" (UID: "2144f08d-8b5a-4d2a-827c-bf232cba2f3f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.791129 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-additional-scripts\") pod \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.791194 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2144f08d-8b5a-4d2a-827c-bf232cba2f3f" (UID: "2144f08d-8b5a-4d2a-827c-bf232cba2f3f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.791600 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2psf6\" (UniqueName: \"kubernetes.io/projected/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-kube-api-access-2psf6\") pod \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\" (UID: \"2144f08d-8b5a-4d2a-827c-bf232cba2f3f\") " Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.792015 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2144f08d-8b5a-4d2a-827c-bf232cba2f3f" (UID: "2144f08d-8b5a-4d2a-827c-bf232cba2f3f"). 
InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.792586 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-scripts" (OuterVolumeSpecName: "scripts") pod "2144f08d-8b5a-4d2a-827c-bf232cba2f3f" (UID: "2144f08d-8b5a-4d2a-827c-bf232cba2f3f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.792957 4895 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.792985 4895 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.793000 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.793013 4895 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.793027 4895 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.822521 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-kube-api-access-2psf6" (OuterVolumeSpecName: "kube-api-access-2psf6") pod "2144f08d-8b5a-4d2a-827c-bf232cba2f3f" (UID: "2144f08d-8b5a-4d2a-827c-bf232cba2f3f"). InnerVolumeSpecName "kube-api-access-2psf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:22 crc kubenswrapper[4895]: I0129 16:29:22.894773 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2psf6\" (UniqueName: \"kubernetes.io/projected/2144f08d-8b5a-4d2a-827c-bf232cba2f3f-kube-api-access-2psf6\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.084292 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7pr4f"] Jan 29 16:29:23 crc kubenswrapper[4895]: E0129 16:29:23.084863 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c50f54-9d03-4873-85e4-1958e9f81a90" containerName="mariadb-account-create-update" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.084929 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c50f54-9d03-4873-85e4-1958e9f81a90" containerName="mariadb-account-create-update" Jan 29 16:29:23 crc kubenswrapper[4895]: E0129 16:29:23.084975 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d710b22-52da-4e44-8483-00d522bdf44e" containerName="mariadb-database-create" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.084990 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d710b22-52da-4e44-8483-00d522bdf44e" containerName="mariadb-database-create" Jan 29 16:29:23 crc kubenswrapper[4895]: E0129 16:29:23.085040 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2144f08d-8b5a-4d2a-827c-bf232cba2f3f" containerName="ovn-config" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.085051 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2144f08d-8b5a-4d2a-827c-bf232cba2f3f" containerName="ovn-config" Jan 29 16:29:23 
crc kubenswrapper[4895]: I0129 16:29:23.085310 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d710b22-52da-4e44-8483-00d522bdf44e" containerName="mariadb-database-create" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.085329 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2144f08d-8b5a-4d2a-827c-bf232cba2f3f" containerName="ovn-config" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.085347 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c50f54-9d03-4873-85e4-1958e9f81a90" containerName="mariadb-account-create-update" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.086116 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.088648 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qb886" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.089172 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.100043 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7pr4f"] Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.202600 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-combined-ca-bundle\") pod \"glance-db-sync-7pr4f\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.203364 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-db-sync-config-data\") pod \"glance-db-sync-7pr4f\" (UID: 
\"cd10e751-7bed-464f-a755-a183b5ed4412\") " pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.203461 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x7xb\" (UniqueName: \"kubernetes.io/projected/cd10e751-7bed-464f-a755-a183b5ed4412-kube-api-access-5x7xb\") pod \"glance-db-sync-7pr4f\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.203572 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-config-data\") pod \"glance-db-sync-7pr4f\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.249813 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7h6nr" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.306051 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-db-sync-config-data\") pod \"glance-db-sync-7pr4f\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.306129 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x7xb\" (UniqueName: \"kubernetes.io/projected/cd10e751-7bed-464f-a755-a183b5ed4412-kube-api-access-5x7xb\") pod \"glance-db-sync-7pr4f\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.306169 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-config-data\") pod \"glance-db-sync-7pr4f\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.306201 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-combined-ca-bundle\") pod \"glance-db-sync-7pr4f\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.318031 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-combined-ca-bundle\") pod \"glance-db-sync-7pr4f\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.323570 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-config-data\") pod \"glance-db-sync-7pr4f\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.326909 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-db-sync-config-data\") pod \"glance-db-sync-7pr4f\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.327677 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x7xb\" (UniqueName: \"kubernetes.io/projected/cd10e751-7bed-464f-a755-a183b5ed4412-kube-api-access-5x7xb\") pod \"glance-db-sync-7pr4f\" (UID: 
\"cd10e751-7bed-464f-a755-a183b5ed4412\") " pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.338328 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7h6nr-config-zvh48" event={"ID":"2144f08d-8b5a-4d2a-827c-bf232cba2f3f","Type":"ContainerDied","Data":"d2618ef23038f748a51d24ecfe055b59106c12e28e227596edbd3cd26bf36780"} Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.338402 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2618ef23038f748a51d24ecfe055b59106c12e28e227596edbd3cd26bf36780" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.338489 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7h6nr-config-zvh48" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.403303 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7pr4f" Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.819116 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7h6nr-config-zvh48"] Jan 29 16:29:23 crc kubenswrapper[4895]: I0129 16:29:23.829915 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7h6nr-config-zvh48"] Jan 29 16:29:24 crc kubenswrapper[4895]: I0129 16:29:24.031345 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7pr4f"] Jan 29 16:29:24 crc kubenswrapper[4895]: W0129 16:29:24.038919 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd10e751_7bed_464f_a755_a183b5ed4412.slice/crio-ead18db35821502f68da962bd713e092aacf97d7c649e6555b0f84739298c800 WatchSource:0}: Error finding container ead18db35821502f68da962bd713e092aacf97d7c649e6555b0f84739298c800: Status 404 returned error can't find the container with id 
ead18db35821502f68da962bd713e092aacf97d7c649e6555b0f84739298c800 Jan 29 16:29:24 crc kubenswrapper[4895]: I0129 16:29:24.347136 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7pr4f" event={"ID":"cd10e751-7bed-464f-a755-a183b5ed4412","Type":"ContainerStarted","Data":"ead18db35821502f68da962bd713e092aacf97d7c649e6555b0f84739298c800"} Jan 29 16:29:25 crc kubenswrapper[4895]: I0129 16:29:25.051595 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2144f08d-8b5a-4d2a-827c-bf232cba2f3f" path="/var/lib/kubelet/pods/2144f08d-8b5a-4d2a-827c-bf232cba2f3f/volumes" Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.025310 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7n9qc"] Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.034437 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7n9qc"] Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.129749 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-49x8b"] Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.134975 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-49x8b" Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.150747 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.183064 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-49x8b"] Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.276787 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sczzs\" (UniqueName: \"kubernetes.io/projected/89f35430-2cba-4af3-bffb-fe817ccdb2e2-kube-api-access-sczzs\") pod \"root-account-create-update-49x8b\" (UID: \"89f35430-2cba-4af3-bffb-fe817ccdb2e2\") " pod="openstack/root-account-create-update-49x8b" Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.276913 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89f35430-2cba-4af3-bffb-fe817ccdb2e2-operator-scripts\") pod \"root-account-create-update-49x8b\" (UID: \"89f35430-2cba-4af3-bffb-fe817ccdb2e2\") " pod="openstack/root-account-create-update-49x8b" Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.379452 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89f35430-2cba-4af3-bffb-fe817ccdb2e2-operator-scripts\") pod \"root-account-create-update-49x8b\" (UID: \"89f35430-2cba-4af3-bffb-fe817ccdb2e2\") " pod="openstack/root-account-create-update-49x8b" Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.379699 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sczzs\" (UniqueName: \"kubernetes.io/projected/89f35430-2cba-4af3-bffb-fe817ccdb2e2-kube-api-access-sczzs\") pod \"root-account-create-update-49x8b\" (UID: 
\"89f35430-2cba-4af3-bffb-fe817ccdb2e2\") " pod="openstack/root-account-create-update-49x8b" Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.380846 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89f35430-2cba-4af3-bffb-fe817ccdb2e2-operator-scripts\") pod \"root-account-create-update-49x8b\" (UID: \"89f35430-2cba-4af3-bffb-fe817ccdb2e2\") " pod="openstack/root-account-create-update-49x8b" Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.405105 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sczzs\" (UniqueName: \"kubernetes.io/projected/89f35430-2cba-4af3-bffb-fe817ccdb2e2-kube-api-access-sczzs\") pod \"root-account-create-update-49x8b\" (UID: \"89f35430-2cba-4af3-bffb-fe817ccdb2e2\") " pod="openstack/root-account-create-update-49x8b" Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.489971 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-49x8b" Jan 29 16:29:26 crc kubenswrapper[4895]: I0129 16:29:26.947547 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-49x8b"] Jan 29 16:29:27 crc kubenswrapper[4895]: I0129 16:29:27.059611 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="154c4350-d415-473d-b33d-f7da4c78b7a6" path="/var/lib/kubelet/pods/154c4350-d415-473d-b33d-f7da4c78b7a6/volumes" Jan 29 16:29:27 crc kubenswrapper[4895]: I0129 16:29:27.381840 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-49x8b" event={"ID":"89f35430-2cba-4af3-bffb-fe817ccdb2e2","Type":"ContainerStarted","Data":"daef435e75b258e279d6f7199d5e890adbd4b7a620a073b109cb7e47e35237d1"} Jan 29 16:29:27 crc kubenswrapper[4895]: I0129 16:29:27.381920 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-49x8b" event={"ID":"89f35430-2cba-4af3-bffb-fe817ccdb2e2","Type":"ContainerStarted","Data":"5f0db00c2e612ac8e9fa0628e46b3bb50007e1e05084d1c1cf2a1ceee573bea7"} Jan 29 16:29:27 crc kubenswrapper[4895]: I0129 16:29:27.411108 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-49x8b" podStartSLOduration=1.411089334 podStartE2EDuration="1.411089334s" podCreationTimestamp="2026-01-29 16:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:27.407782904 +0000 UTC m=+1051.210760168" watchObservedRunningTime="2026-01-29 16:29:27.411089334 +0000 UTC m=+1051.214066598" Jan 29 16:29:27 crc kubenswrapper[4895]: I0129 16:29:27.822773 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:29:27 crc kubenswrapper[4895]: I0129 16:29:27.822861 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:29:27 crc kubenswrapper[4895]: I0129 16:29:27.822952 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:29:27 crc kubenswrapper[4895]: I0129 16:29:27.824041 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56eae442f108da9a8c7cd978ba66ad557a49280ec8ee87651bc60ede37bf78eb"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:29:27 crc kubenswrapper[4895]: I0129 16:29:27.824129 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://56eae442f108da9a8c7cd978ba66ad557a49280ec8ee87651bc60ede37bf78eb" gracePeriod=600 Jan 29 16:29:28 crc kubenswrapper[4895]: I0129 16:29:28.397088 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="56eae442f108da9a8c7cd978ba66ad557a49280ec8ee87651bc60ede37bf78eb" exitCode=0 Jan 29 16:29:28 crc kubenswrapper[4895]: I0129 16:29:28.397160 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" 
event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"56eae442f108da9a8c7cd978ba66ad557a49280ec8ee87651bc60ede37bf78eb"} Jan 29 16:29:28 crc kubenswrapper[4895]: I0129 16:29:28.397233 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"b61d481b9d79815e2aa0a6766b442621a7f9d5212d6a5963946c3b9463e8ef1c"} Jan 29 16:29:28 crc kubenswrapper[4895]: I0129 16:29:28.397260 4895 scope.go:117] "RemoveContainer" containerID="38b01ff5ef7faf80c7f2424640fb866df9e6d62369651d4360c4c301990dfde0" Jan 29 16:29:28 crc kubenswrapper[4895]: I0129 16:29:28.401511 4895 generic.go:334] "Generic (PLEG): container finished" podID="89f35430-2cba-4af3-bffb-fe817ccdb2e2" containerID="daef435e75b258e279d6f7199d5e890adbd4b7a620a073b109cb7e47e35237d1" exitCode=0 Jan 29 16:29:28 crc kubenswrapper[4895]: I0129 16:29:28.401576 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-49x8b" event={"ID":"89f35430-2cba-4af3-bffb-fe817ccdb2e2","Type":"ContainerDied","Data":"daef435e75b258e279d6f7199d5e890adbd4b7a620a073b109cb7e47e35237d1"} Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.367226 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.690446 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7v9rk"] Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.691676 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7v9rk" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.711495 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7v9rk"] Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.783620 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-c4jml"] Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.785083 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.785278 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c4jml" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.798885 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-c4jml"] Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.805994 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd7zq\" (UniqueName: \"kubernetes.io/projected/eaad67d5-b9bb-42ae-befd-3b8765e8b760-kube-api-access-qd7zq\") pod \"cinder-db-create-7v9rk\" (UID: \"eaad67d5-b9bb-42ae-befd-3b8765e8b760\") " pod="openstack/cinder-db-create-7v9rk" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.806143 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaad67d5-b9bb-42ae-befd-3b8765e8b760-operator-scripts\") pod \"cinder-db-create-7v9rk\" (UID: \"eaad67d5-b9bb-42ae-befd-3b8765e8b760\") " pod="openstack/cinder-db-create-7v9rk" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.908298 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/188e6f67-6977-4709-bb3a-caf493bcc276-operator-scripts\") pod 
\"barbican-db-create-c4jml\" (UID: \"188e6f67-6977-4709-bb3a-caf493bcc276\") " pod="openstack/barbican-db-create-c4jml" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.908488 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaad67d5-b9bb-42ae-befd-3b8765e8b760-operator-scripts\") pod \"cinder-db-create-7v9rk\" (UID: \"eaad67d5-b9bb-42ae-befd-3b8765e8b760\") " pod="openstack/cinder-db-create-7v9rk" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.908528 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2bb7\" (UniqueName: \"kubernetes.io/projected/188e6f67-6977-4709-bb3a-caf493bcc276-kube-api-access-d2bb7\") pod \"barbican-db-create-c4jml\" (UID: \"188e6f67-6977-4709-bb3a-caf493bcc276\") " pod="openstack/barbican-db-create-c4jml" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.908597 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd7zq\" (UniqueName: \"kubernetes.io/projected/eaad67d5-b9bb-42ae-befd-3b8765e8b760-kube-api-access-qd7zq\") pod \"cinder-db-create-7v9rk\" (UID: \"eaad67d5-b9bb-42ae-befd-3b8765e8b760\") " pod="openstack/cinder-db-create-7v9rk" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.911285 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaad67d5-b9bb-42ae-befd-3b8765e8b760-operator-scripts\") pod \"cinder-db-create-7v9rk\" (UID: \"eaad67d5-b9bb-42ae-befd-3b8765e8b760\") " pod="openstack/cinder-db-create-7v9rk" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.915596 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d1f5-account-create-update-8vm9b"] Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.917326 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d1f5-account-create-update-8vm9b" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.927339 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.930250 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d1f5-account-create-update-8vm9b"] Jan 29 16:29:34 crc kubenswrapper[4895]: I0129 16:29:34.950637 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd7zq\" (UniqueName: \"kubernetes.io/projected/eaad67d5-b9bb-42ae-befd-3b8765e8b760-kube-api-access-qd7zq\") pod \"cinder-db-create-7v9rk\" (UID: \"eaad67d5-b9bb-42ae-befd-3b8765e8b760\") " pod="openstack/cinder-db-create-7v9rk" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.014930 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4923-account-create-update-rvxh4"] Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.015945 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4923-account-create-update-rvxh4" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.017790 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.022433 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34d38ae5-5cb9-47f3-88f0-818962fed6c1-operator-scripts\") pod \"cinder-d1f5-account-create-update-8vm9b\" (UID: \"34d38ae5-5cb9-47f3-88f0-818962fed6c1\") " pod="openstack/cinder-d1f5-account-create-update-8vm9b" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.022778 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7v9rk" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.022917 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2bb7\" (UniqueName: \"kubernetes.io/projected/188e6f67-6977-4709-bb3a-caf493bcc276-kube-api-access-d2bb7\") pod \"barbican-db-create-c4jml\" (UID: \"188e6f67-6977-4709-bb3a-caf493bcc276\") " pod="openstack/barbican-db-create-c4jml" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.023243 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c4d9\" (UniqueName: \"kubernetes.io/projected/34d38ae5-5cb9-47f3-88f0-818962fed6c1-kube-api-access-4c4d9\") pod \"cinder-d1f5-account-create-update-8vm9b\" (UID: \"34d38ae5-5cb9-47f3-88f0-818962fed6c1\") " pod="openstack/cinder-d1f5-account-create-update-8vm9b" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.023310 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/188e6f67-6977-4709-bb3a-caf493bcc276-operator-scripts\") pod \"barbican-db-create-c4jml\" (UID: \"188e6f67-6977-4709-bb3a-caf493bcc276\") " pod="openstack/barbican-db-create-c4jml" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.026024 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/188e6f67-6977-4709-bb3a-caf493bcc276-operator-scripts\") pod \"barbican-db-create-c4jml\" (UID: \"188e6f67-6977-4709-bb3a-caf493bcc276\") " pod="openstack/barbican-db-create-c4jml" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.080980 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2bb7\" (UniqueName: \"kubernetes.io/projected/188e6f67-6977-4709-bb3a-caf493bcc276-kube-api-access-d2bb7\") pod \"barbican-db-create-c4jml\" (UID: 
\"188e6f67-6977-4709-bb3a-caf493bcc276\") " pod="openstack/barbican-db-create-c4jml" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.086471 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4923-account-create-update-rvxh4"] Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.105942 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-2lh75"] Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.106064 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c4jml" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.107459 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.112032 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.112298 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.112522 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-m7trg" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.119529 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.124370 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2lh75"] Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.125424 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea59417-b9ef-43f6-b2f1-76d15c6dd51b-operator-scripts\") pod \"barbican-4923-account-create-update-rvxh4\" (UID: \"fea59417-b9ef-43f6-b2f1-76d15c6dd51b\") " 
pod="openstack/barbican-4923-account-create-update-rvxh4" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.125542 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34d38ae5-5cb9-47f3-88f0-818962fed6c1-operator-scripts\") pod \"cinder-d1f5-account-create-update-8vm9b\" (UID: \"34d38ae5-5cb9-47f3-88f0-818962fed6c1\") " pod="openstack/cinder-d1f5-account-create-update-8vm9b" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.125648 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqpqh\" (UniqueName: \"kubernetes.io/projected/fea59417-b9ef-43f6-b2f1-76d15c6dd51b-kube-api-access-lqpqh\") pod \"barbican-4923-account-create-update-rvxh4\" (UID: \"fea59417-b9ef-43f6-b2f1-76d15c6dd51b\") " pod="openstack/barbican-4923-account-create-update-rvxh4" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.125775 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c4d9\" (UniqueName: \"kubernetes.io/projected/34d38ae5-5cb9-47f3-88f0-818962fed6c1-kube-api-access-4c4d9\") pod \"cinder-d1f5-account-create-update-8vm9b\" (UID: \"34d38ae5-5cb9-47f3-88f0-818962fed6c1\") " pod="openstack/cinder-d1f5-account-create-update-8vm9b" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.126441 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34d38ae5-5cb9-47f3-88f0-818962fed6c1-operator-scripts\") pod \"cinder-d1f5-account-create-update-8vm9b\" (UID: \"34d38ae5-5cb9-47f3-88f0-818962fed6c1\") " pod="openstack/cinder-d1f5-account-create-update-8vm9b" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.168180 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c4d9\" (UniqueName: 
\"kubernetes.io/projected/34d38ae5-5cb9-47f3-88f0-818962fed6c1-kube-api-access-4c4d9\") pod \"cinder-d1f5-account-create-update-8vm9b\" (UID: \"34d38ae5-5cb9-47f3-88f0-818962fed6c1\") " pod="openstack/cinder-d1f5-account-create-update-8vm9b" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.202322 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-wcksm"] Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.203837 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wcksm" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.217808 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-baf0-account-create-update-m7shf"] Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.219223 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-baf0-account-create-update-m7shf" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.223653 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.228495 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wcksm"] Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.229779 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqpqh\" (UniqueName: \"kubernetes.io/projected/fea59417-b9ef-43f6-b2f1-76d15c6dd51b-kube-api-access-lqpqh\") pod \"barbican-4923-account-create-update-rvxh4\" (UID: \"fea59417-b9ef-43f6-b2f1-76d15c6dd51b\") " pod="openstack/barbican-4923-account-create-update-rvxh4" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.229835 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gchrt\" (UniqueName: \"kubernetes.io/projected/1fa46325-18c8-48b6-bfe2-d5492f6a5998-kube-api-access-gchrt\") pod 
\"keystone-db-sync-2lh75\" (UID: \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\") " pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.229906 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa46325-18c8-48b6-bfe2-d5492f6a5998-combined-ca-bundle\") pod \"keystone-db-sync-2lh75\" (UID: \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\") " pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.229947 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea59417-b9ef-43f6-b2f1-76d15c6dd51b-operator-scripts\") pod \"barbican-4923-account-create-update-rvxh4\" (UID: \"fea59417-b9ef-43f6-b2f1-76d15c6dd51b\") " pod="openstack/barbican-4923-account-create-update-rvxh4" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.230004 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa46325-18c8-48b6-bfe2-d5492f6a5998-config-data\") pod \"keystone-db-sync-2lh75\" (UID: \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\") " pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.231347 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea59417-b9ef-43f6-b2f1-76d15c6dd51b-operator-scripts\") pod \"barbican-4923-account-create-update-rvxh4\" (UID: \"fea59417-b9ef-43f6-b2f1-76d15c6dd51b\") " pod="openstack/barbican-4923-account-create-update-rvxh4" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.237721 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-baf0-account-create-update-m7shf"] Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.279798 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqpqh\" (UniqueName: \"kubernetes.io/projected/fea59417-b9ef-43f6-b2f1-76d15c6dd51b-kube-api-access-lqpqh\") pod \"barbican-4923-account-create-update-rvxh4\" (UID: \"fea59417-b9ef-43f6-b2f1-76d15c6dd51b\") " pod="openstack/barbican-4923-account-create-update-rvxh4" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.293733 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d1f5-account-create-update-8vm9b" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.332688 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlfrc\" (UniqueName: \"kubernetes.io/projected/f2d83bb5-2939-44e6-a0c5-8bf4893aebda-kube-api-access-xlfrc\") pod \"neutron-db-create-wcksm\" (UID: \"f2d83bb5-2939-44e6-a0c5-8bf4893aebda\") " pod="openstack/neutron-db-create-wcksm" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.332743 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtgzc\" (UniqueName: \"kubernetes.io/projected/85c63c14-1712-4979-b5f0-abf5a4d4b72a-kube-api-access-rtgzc\") pod \"neutron-baf0-account-create-update-m7shf\" (UID: \"85c63c14-1712-4979-b5f0-abf5a4d4b72a\") " pod="openstack/neutron-baf0-account-create-update-m7shf" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.332780 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa46325-18c8-48b6-bfe2-d5492f6a5998-config-data\") pod \"keystone-db-sync-2lh75\" (UID: \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\") " pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.332888 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f2d83bb5-2939-44e6-a0c5-8bf4893aebda-operator-scripts\") pod \"neutron-db-create-wcksm\" (UID: \"f2d83bb5-2939-44e6-a0c5-8bf4893aebda\") " pod="openstack/neutron-db-create-wcksm" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.332927 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchrt\" (UniqueName: \"kubernetes.io/projected/1fa46325-18c8-48b6-bfe2-d5492f6a5998-kube-api-access-gchrt\") pod \"keystone-db-sync-2lh75\" (UID: \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\") " pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.333157 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa46325-18c8-48b6-bfe2-d5492f6a5998-combined-ca-bundle\") pod \"keystone-db-sync-2lh75\" (UID: \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\") " pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.333292 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c63c14-1712-4979-b5f0-abf5a4d4b72a-operator-scripts\") pod \"neutron-baf0-account-create-update-m7shf\" (UID: \"85c63c14-1712-4979-b5f0-abf5a4d4b72a\") " pod="openstack/neutron-baf0-account-create-update-m7shf" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.336425 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa46325-18c8-48b6-bfe2-d5492f6a5998-combined-ca-bundle\") pod \"keystone-db-sync-2lh75\" (UID: \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\") " pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.338804 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1fa46325-18c8-48b6-bfe2-d5492f6a5998-config-data\") pod \"keystone-db-sync-2lh75\" (UID: \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\") " pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.339174 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4923-account-create-update-rvxh4" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.352335 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gchrt\" (UniqueName: \"kubernetes.io/projected/1fa46325-18c8-48b6-bfe2-d5492f6a5998-kube-api-access-gchrt\") pod \"keystone-db-sync-2lh75\" (UID: \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\") " pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.428810 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.436398 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlfrc\" (UniqueName: \"kubernetes.io/projected/f2d83bb5-2939-44e6-a0c5-8bf4893aebda-kube-api-access-xlfrc\") pod \"neutron-db-create-wcksm\" (UID: \"f2d83bb5-2939-44e6-a0c5-8bf4893aebda\") " pod="openstack/neutron-db-create-wcksm" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.436469 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtgzc\" (UniqueName: \"kubernetes.io/projected/85c63c14-1712-4979-b5f0-abf5a4d4b72a-kube-api-access-rtgzc\") pod \"neutron-baf0-account-create-update-m7shf\" (UID: \"85c63c14-1712-4979-b5f0-abf5a4d4b72a\") " pod="openstack/neutron-baf0-account-create-update-m7shf" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.439262 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f2d83bb5-2939-44e6-a0c5-8bf4893aebda-operator-scripts\") pod \"neutron-db-create-wcksm\" (UID: \"f2d83bb5-2939-44e6-a0c5-8bf4893aebda\") " pod="openstack/neutron-db-create-wcksm" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.439433 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c63c14-1712-4979-b5f0-abf5a4d4b72a-operator-scripts\") pod \"neutron-baf0-account-create-update-m7shf\" (UID: \"85c63c14-1712-4979-b5f0-abf5a4d4b72a\") " pod="openstack/neutron-baf0-account-create-update-m7shf" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.440388 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d83bb5-2939-44e6-a0c5-8bf4893aebda-operator-scripts\") pod \"neutron-db-create-wcksm\" (UID: \"f2d83bb5-2939-44e6-a0c5-8bf4893aebda\") " pod="openstack/neutron-db-create-wcksm" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.440387 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c63c14-1712-4979-b5f0-abf5a4d4b72a-operator-scripts\") pod \"neutron-baf0-account-create-update-m7shf\" (UID: \"85c63c14-1712-4979-b5f0-abf5a4d4b72a\") " pod="openstack/neutron-baf0-account-create-update-m7shf" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.455398 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtgzc\" (UniqueName: \"kubernetes.io/projected/85c63c14-1712-4979-b5f0-abf5a4d4b72a-kube-api-access-rtgzc\") pod \"neutron-baf0-account-create-update-m7shf\" (UID: \"85c63c14-1712-4979-b5f0-abf5a4d4b72a\") " pod="openstack/neutron-baf0-account-create-update-m7shf" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.457683 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlfrc\" (UniqueName: 
\"kubernetes.io/projected/f2d83bb5-2939-44e6-a0c5-8bf4893aebda-kube-api-access-xlfrc\") pod \"neutron-db-create-wcksm\" (UID: \"f2d83bb5-2939-44e6-a0c5-8bf4893aebda\") " pod="openstack/neutron-db-create-wcksm" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.533343 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wcksm" Jan 29 16:29:35 crc kubenswrapper[4895]: I0129 16:29:35.564272 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-baf0-account-create-update-m7shf" Jan 29 16:29:37 crc kubenswrapper[4895]: I0129 16:29:37.649977 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-49x8b" Jan 29 16:29:37 crc kubenswrapper[4895]: I0129 16:29:37.785542 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sczzs\" (UniqueName: \"kubernetes.io/projected/89f35430-2cba-4af3-bffb-fe817ccdb2e2-kube-api-access-sczzs\") pod \"89f35430-2cba-4af3-bffb-fe817ccdb2e2\" (UID: \"89f35430-2cba-4af3-bffb-fe817ccdb2e2\") " Jan 29 16:29:37 crc kubenswrapper[4895]: I0129 16:29:37.785650 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89f35430-2cba-4af3-bffb-fe817ccdb2e2-operator-scripts\") pod \"89f35430-2cba-4af3-bffb-fe817ccdb2e2\" (UID: \"89f35430-2cba-4af3-bffb-fe817ccdb2e2\") " Jan 29 16:29:37 crc kubenswrapper[4895]: I0129 16:29:37.786851 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f35430-2cba-4af3-bffb-fe817ccdb2e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89f35430-2cba-4af3-bffb-fe817ccdb2e2" (UID: "89f35430-2cba-4af3-bffb-fe817ccdb2e2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:37 crc kubenswrapper[4895]: I0129 16:29:37.796200 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f35430-2cba-4af3-bffb-fe817ccdb2e2-kube-api-access-sczzs" (OuterVolumeSpecName: "kube-api-access-sczzs") pod "89f35430-2cba-4af3-bffb-fe817ccdb2e2" (UID: "89f35430-2cba-4af3-bffb-fe817ccdb2e2"). InnerVolumeSpecName "kube-api-access-sczzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:37 crc kubenswrapper[4895]: I0129 16:29:37.887355 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sczzs\" (UniqueName: \"kubernetes.io/projected/89f35430-2cba-4af3-bffb-fe817ccdb2e2-kube-api-access-sczzs\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:37 crc kubenswrapper[4895]: I0129 16:29:37.887773 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89f35430-2cba-4af3-bffb-fe817ccdb2e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.285688 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d1f5-account-create-update-8vm9b"] Jan 29 16:29:38 crc kubenswrapper[4895]: W0129 16:29:38.301759 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34d38ae5_5cb9_47f3_88f0_818962fed6c1.slice/crio-7ba3d2af54887f5691871aa659cf6a9229aa460be71d8edf287805be5ad4b12c WatchSource:0}: Error finding container 7ba3d2af54887f5691871aa659cf6a9229aa460be71d8edf287805be5ad4b12c: Status 404 returned error can't find the container with id 7ba3d2af54887f5691871aa659cf6a9229aa460be71d8edf287805be5ad4b12c Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.473482 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-c4jml"] Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 
16:29:38.479769 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7v9rk"] Jan 29 16:29:38 crc kubenswrapper[4895]: W0129 16:29:38.480276 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaad67d5_b9bb_42ae_befd_3b8765e8b760.slice/crio-03d8ed5375cc62f702dff4241d823eef8146f996cbdd57965673ada1db1aa2d7 WatchSource:0}: Error finding container 03d8ed5375cc62f702dff4241d823eef8146f996cbdd57965673ada1db1aa2d7: Status 404 returned error can't find the container with id 03d8ed5375cc62f702dff4241d823eef8146f996cbdd57965673ada1db1aa2d7 Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.516945 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4923-account-create-update-rvxh4"] Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.531736 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-baf0-account-create-update-m7shf"] Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.555387 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7v9rk" event={"ID":"eaad67d5-b9bb-42ae-befd-3b8765e8b760","Type":"ContainerStarted","Data":"03d8ed5375cc62f702dff4241d823eef8146f996cbdd57965673ada1db1aa2d7"} Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.568576 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-49x8b" Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.570362 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-49x8b" event={"ID":"89f35430-2cba-4af3-bffb-fe817ccdb2e2","Type":"ContainerDied","Data":"5f0db00c2e612ac8e9fa0628e46b3bb50007e1e05084d1c1cf2a1ceee573bea7"} Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.570484 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f0db00c2e612ac8e9fa0628e46b3bb50007e1e05084d1c1cf2a1ceee573bea7" Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.578524 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c4jml" event={"ID":"188e6f67-6977-4709-bb3a-caf493bcc276","Type":"ContainerStarted","Data":"a76d51afae84b8d5c3c8c6a56970c46a5cb8e92ca9ac07c1b600b2d55f8ec4e8"} Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.591343 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d1f5-account-create-update-8vm9b" event={"ID":"34d38ae5-5cb9-47f3-88f0-818962fed6c1","Type":"ContainerStarted","Data":"96ba3c6c7ab1a879aafe99cefc5574d28a6bb6356600144edcfa05efcac58319"} Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.591409 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d1f5-account-create-update-8vm9b" event={"ID":"34d38ae5-5cb9-47f3-88f0-818962fed6c1","Type":"ContainerStarted","Data":"7ba3d2af54887f5691871aa659cf6a9229aa460be71d8edf287805be5ad4b12c"} Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.631836 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-d1f5-account-create-update-8vm9b" podStartSLOduration=4.631800698 podStartE2EDuration="4.631800698s" podCreationTimestamp="2026-01-29 16:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 16:29:38.617047627 +0000 UTC m=+1062.420024891" watchObservedRunningTime="2026-01-29 16:29:38.631800698 +0000 UTC m=+1062.434777962" Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.670222 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wcksm"] Jan 29 16:29:38 crc kubenswrapper[4895]: W0129 16:29:38.683111 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2d83bb5_2939_44e6_a0c5_8bf4893aebda.slice/crio-12daeed957d14ed0c616bf378cc7983847e9381ef75a7997cc72d0431b00b8fc WatchSource:0}: Error finding container 12daeed957d14ed0c616bf378cc7983847e9381ef75a7997cc72d0431b00b8fc: Status 404 returned error can't find the container with id 12daeed957d14ed0c616bf378cc7983847e9381ef75a7997cc72d0431b00b8fc Jan 29 16:29:38 crc kubenswrapper[4895]: I0129 16:29:38.697143 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2lh75"] Jan 29 16:29:38 crc kubenswrapper[4895]: W0129 16:29:38.708029 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fa46325_18c8_48b6_bfe2_d5492f6a5998.slice/crio-39fb25213012e8cdabae13c083051406ddb3dfc33e4bc5a493feb08b347c9852 WatchSource:0}: Error finding container 39fb25213012e8cdabae13c083051406ddb3dfc33e4bc5a493feb08b347c9852: Status 404 returned error can't find the container with id 39fb25213012e8cdabae13c083051406ddb3dfc33e4bc5a493feb08b347c9852 Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.608681 4895 generic.go:334] "Generic (PLEG): container finished" podID="85c63c14-1712-4979-b5f0-abf5a4d4b72a" containerID="8accabf60fcc4324db39482f86e7b1476cec6ce880494834f5da8d22ea5585f1" exitCode=0 Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.608754 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-baf0-account-create-update-m7shf" 
event={"ID":"85c63c14-1712-4979-b5f0-abf5a4d4b72a","Type":"ContainerDied","Data":"8accabf60fcc4324db39482f86e7b1476cec6ce880494834f5da8d22ea5585f1"} Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.611256 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-baf0-account-create-update-m7shf" event={"ID":"85c63c14-1712-4979-b5f0-abf5a4d4b72a","Type":"ContainerStarted","Data":"3ac8dbbb636c9070a3495419db67d3ba4ccd6c4cec954240458ff5a2168380e7"} Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.615339 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2lh75" event={"ID":"1fa46325-18c8-48b6-bfe2-d5492f6a5998","Type":"ContainerStarted","Data":"39fb25213012e8cdabae13c083051406ddb3dfc33e4bc5a493feb08b347c9852"} Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.623517 4895 generic.go:334] "Generic (PLEG): container finished" podID="34d38ae5-5cb9-47f3-88f0-818962fed6c1" containerID="96ba3c6c7ab1a879aafe99cefc5574d28a6bb6356600144edcfa05efcac58319" exitCode=0 Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.623607 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d1f5-account-create-update-8vm9b" event={"ID":"34d38ae5-5cb9-47f3-88f0-818962fed6c1","Type":"ContainerDied","Data":"96ba3c6c7ab1a879aafe99cefc5574d28a6bb6356600144edcfa05efcac58319"} Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.626708 4895 generic.go:334] "Generic (PLEG): container finished" podID="eaad67d5-b9bb-42ae-befd-3b8765e8b760" containerID="8b18606ccedc201a7d9d72891c392c0e9a525f1d479a63b65df3aa21c2b773a4" exitCode=0 Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.626787 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7v9rk" event={"ID":"eaad67d5-b9bb-42ae-befd-3b8765e8b760","Type":"ContainerDied","Data":"8b18606ccedc201a7d9d72891c392c0e9a525f1d479a63b65df3aa21c2b773a4"} Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.638309 4895 
generic.go:334] "Generic (PLEG): container finished" podID="188e6f67-6977-4709-bb3a-caf493bcc276" containerID="16dda0fc19a6b3288e716f995eaccc2d29556874949da98b3d297625b59fc787" exitCode=0 Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.638417 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c4jml" event={"ID":"188e6f67-6977-4709-bb3a-caf493bcc276","Type":"ContainerDied","Data":"16dda0fc19a6b3288e716f995eaccc2d29556874949da98b3d297625b59fc787"} Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.641103 4895 generic.go:334] "Generic (PLEG): container finished" podID="fea59417-b9ef-43f6-b2f1-76d15c6dd51b" containerID="503eea865f11ffafa3cceb35f6b0f21a7488b621da2438b2120604970189eaeb" exitCode=0 Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.641202 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4923-account-create-update-rvxh4" event={"ID":"fea59417-b9ef-43f6-b2f1-76d15c6dd51b","Type":"ContainerDied","Data":"503eea865f11ffafa3cceb35f6b0f21a7488b621da2438b2120604970189eaeb"} Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.641244 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4923-account-create-update-rvxh4" event={"ID":"fea59417-b9ef-43f6-b2f1-76d15c6dd51b","Type":"ContainerStarted","Data":"178356bf4df949dc84f727e5c4c4b46344a8674d27cdafd30c453b16722a32bb"} Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.646259 4895 generic.go:334] "Generic (PLEG): container finished" podID="f2d83bb5-2939-44e6-a0c5-8bf4893aebda" containerID="5ba5e726ef40424a2590c4cf387ed6924800b9c6572ed7b532884cd2f5ca1850" exitCode=0 Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.646353 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wcksm" event={"ID":"f2d83bb5-2939-44e6-a0c5-8bf4893aebda","Type":"ContainerDied","Data":"5ba5e726ef40424a2590c4cf387ed6924800b9c6572ed7b532884cd2f5ca1850"} Jan 29 16:29:39 crc 
kubenswrapper[4895]: I0129 16:29:39.646406 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wcksm" event={"ID":"f2d83bb5-2939-44e6-a0c5-8bf4893aebda","Type":"ContainerStarted","Data":"12daeed957d14ed0c616bf378cc7983847e9381ef75a7997cc72d0431b00b8fc"} Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.651304 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7pr4f" event={"ID":"cd10e751-7bed-464f-a755-a183b5ed4412","Type":"ContainerStarted","Data":"007ad5d447907115a9942c2166eab6fbbbd96d2723d28430f48cfdde3e3059ef"} Jan 29 16:29:39 crc kubenswrapper[4895]: I0129 16:29:39.752861 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7pr4f" podStartSLOduration=2.993051996 podStartE2EDuration="16.752841822s" podCreationTimestamp="2026-01-29 16:29:23 +0000 UTC" firstStartedPulling="2026-01-29 16:29:24.041069493 +0000 UTC m=+1047.844046757" lastFinishedPulling="2026-01-29 16:29:37.800859319 +0000 UTC m=+1061.603836583" observedRunningTime="2026-01-29 16:29:39.74616294 +0000 UTC m=+1063.549140224" watchObservedRunningTime="2026-01-29 16:29:39.752841822 +0000 UTC m=+1063.555819086" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.375439 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d1f5-account-create-update-8vm9b" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.383282 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c4jml" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.387182 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4923-account-create-update-rvxh4" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.464230 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7v9rk" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.471998 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-baf0-account-create-update-m7shf" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.475837 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wcksm" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.507023 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/188e6f67-6977-4709-bb3a-caf493bcc276-operator-scripts\") pod \"188e6f67-6977-4709-bb3a-caf493bcc276\" (UID: \"188e6f67-6977-4709-bb3a-caf493bcc276\") " Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.507128 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea59417-b9ef-43f6-b2f1-76d15c6dd51b-operator-scripts\") pod \"fea59417-b9ef-43f6-b2f1-76d15c6dd51b\" (UID: \"fea59417-b9ef-43f6-b2f1-76d15c6dd51b\") " Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.507227 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2bb7\" (UniqueName: \"kubernetes.io/projected/188e6f67-6977-4709-bb3a-caf493bcc276-kube-api-access-d2bb7\") pod \"188e6f67-6977-4709-bb3a-caf493bcc276\" (UID: \"188e6f67-6977-4709-bb3a-caf493bcc276\") " Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.507266 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34d38ae5-5cb9-47f3-88f0-818962fed6c1-operator-scripts\") pod \"34d38ae5-5cb9-47f3-88f0-818962fed6c1\" (UID: \"34d38ae5-5cb9-47f3-88f0-818962fed6c1\") " Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.507354 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lqpqh\" (UniqueName: \"kubernetes.io/projected/fea59417-b9ef-43f6-b2f1-76d15c6dd51b-kube-api-access-lqpqh\") pod \"fea59417-b9ef-43f6-b2f1-76d15c6dd51b\" (UID: \"fea59417-b9ef-43f6-b2f1-76d15c6dd51b\") " Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.507462 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c4d9\" (UniqueName: \"kubernetes.io/projected/34d38ae5-5cb9-47f3-88f0-818962fed6c1-kube-api-access-4c4d9\") pod \"34d38ae5-5cb9-47f3-88f0-818962fed6c1\" (UID: \"34d38ae5-5cb9-47f3-88f0-818962fed6c1\") " Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.508816 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea59417-b9ef-43f6-b2f1-76d15c6dd51b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fea59417-b9ef-43f6-b2f1-76d15c6dd51b" (UID: "fea59417-b9ef-43f6-b2f1-76d15c6dd51b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.508949 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/188e6f67-6977-4709-bb3a-caf493bcc276-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "188e6f67-6977-4709-bb3a-caf493bcc276" (UID: "188e6f67-6977-4709-bb3a-caf493bcc276"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.509016 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34d38ae5-5cb9-47f3-88f0-818962fed6c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34d38ae5-5cb9-47f3-88f0-818962fed6c1" (UID: "34d38ae5-5cb9-47f3-88f0-818962fed6c1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.513806 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea59417-b9ef-43f6-b2f1-76d15c6dd51b-kube-api-access-lqpqh" (OuterVolumeSpecName: "kube-api-access-lqpqh") pod "fea59417-b9ef-43f6-b2f1-76d15c6dd51b" (UID: "fea59417-b9ef-43f6-b2f1-76d15c6dd51b"). InnerVolumeSpecName "kube-api-access-lqpqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.517260 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d38ae5-5cb9-47f3-88f0-818962fed6c1-kube-api-access-4c4d9" (OuterVolumeSpecName: "kube-api-access-4c4d9") pod "34d38ae5-5cb9-47f3-88f0-818962fed6c1" (UID: "34d38ae5-5cb9-47f3-88f0-818962fed6c1"). InnerVolumeSpecName "kube-api-access-4c4d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.524979 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/188e6f67-6977-4709-bb3a-caf493bcc276-kube-api-access-d2bb7" (OuterVolumeSpecName: "kube-api-access-d2bb7") pod "188e6f67-6977-4709-bb3a-caf493bcc276" (UID: "188e6f67-6977-4709-bb3a-caf493bcc276"). InnerVolumeSpecName "kube-api-access-d2bb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.608646 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d83bb5-2939-44e6-a0c5-8bf4893aebda-operator-scripts\") pod \"f2d83bb5-2939-44e6-a0c5-8bf4893aebda\" (UID: \"f2d83bb5-2939-44e6-a0c5-8bf4893aebda\") " Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.609148 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd7zq\" (UniqueName: \"kubernetes.io/projected/eaad67d5-b9bb-42ae-befd-3b8765e8b760-kube-api-access-qd7zq\") pod \"eaad67d5-b9bb-42ae-befd-3b8765e8b760\" (UID: \"eaad67d5-b9bb-42ae-befd-3b8765e8b760\") " Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.609186 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c63c14-1712-4979-b5f0-abf5a4d4b72a-operator-scripts\") pod \"85c63c14-1712-4979-b5f0-abf5a4d4b72a\" (UID: \"85c63c14-1712-4979-b5f0-abf5a4d4b72a\") " Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.609204 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2d83bb5-2939-44e6-a0c5-8bf4893aebda-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2d83bb5-2939-44e6-a0c5-8bf4893aebda" (UID: "f2d83bb5-2939-44e6-a0c5-8bf4893aebda"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.609321 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtgzc\" (UniqueName: \"kubernetes.io/projected/85c63c14-1712-4979-b5f0-abf5a4d4b72a-kube-api-access-rtgzc\") pod \"85c63c14-1712-4979-b5f0-abf5a4d4b72a\" (UID: \"85c63c14-1712-4979-b5f0-abf5a4d4b72a\") " Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.609404 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlfrc\" (UniqueName: \"kubernetes.io/projected/f2d83bb5-2939-44e6-a0c5-8bf4893aebda-kube-api-access-xlfrc\") pod \"f2d83bb5-2939-44e6-a0c5-8bf4893aebda\" (UID: \"f2d83bb5-2939-44e6-a0c5-8bf4893aebda\") " Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.609465 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaad67d5-b9bb-42ae-befd-3b8765e8b760-operator-scripts\") pod \"eaad67d5-b9bb-42ae-befd-3b8765e8b760\" (UID: \"eaad67d5-b9bb-42ae-befd-3b8765e8b760\") " Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.609714 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85c63c14-1712-4979-b5f0-abf5a4d4b72a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85c63c14-1712-4979-b5f0-abf5a4d4b72a" (UID: "85c63c14-1712-4979-b5f0-abf5a4d4b72a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.609908 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqpqh\" (UniqueName: \"kubernetes.io/projected/fea59417-b9ef-43f6-b2f1-76d15c6dd51b-kube-api-access-lqpqh\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.609927 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d83bb5-2939-44e6-a0c5-8bf4893aebda-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.609939 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85c63c14-1712-4979-b5f0-abf5a4d4b72a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.610131 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c4d9\" (UniqueName: \"kubernetes.io/projected/34d38ae5-5cb9-47f3-88f0-818962fed6c1-kube-api-access-4c4d9\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.610133 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaad67d5-b9bb-42ae-befd-3b8765e8b760-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eaad67d5-b9bb-42ae-befd-3b8765e8b760" (UID: "eaad67d5-b9bb-42ae-befd-3b8765e8b760"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.610148 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/188e6f67-6977-4709-bb3a-caf493bcc276-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.610160 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fea59417-b9ef-43f6-b2f1-76d15c6dd51b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.610171 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2bb7\" (UniqueName: \"kubernetes.io/projected/188e6f67-6977-4709-bb3a-caf493bcc276-kube-api-access-d2bb7\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.610182 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34d38ae5-5cb9-47f3-88f0-818962fed6c1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.614193 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaad67d5-b9bb-42ae-befd-3b8765e8b760-kube-api-access-qd7zq" (OuterVolumeSpecName: "kube-api-access-qd7zq") pod "eaad67d5-b9bb-42ae-befd-3b8765e8b760" (UID: "eaad67d5-b9bb-42ae-befd-3b8765e8b760"). InnerVolumeSpecName "kube-api-access-qd7zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.614391 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d83bb5-2939-44e6-a0c5-8bf4893aebda-kube-api-access-xlfrc" (OuterVolumeSpecName: "kube-api-access-xlfrc") pod "f2d83bb5-2939-44e6-a0c5-8bf4893aebda" (UID: "f2d83bb5-2939-44e6-a0c5-8bf4893aebda"). 
InnerVolumeSpecName "kube-api-access-xlfrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.616153 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c63c14-1712-4979-b5f0-abf5a4d4b72a-kube-api-access-rtgzc" (OuterVolumeSpecName: "kube-api-access-rtgzc") pod "85c63c14-1712-4979-b5f0-abf5a4d4b72a" (UID: "85c63c14-1712-4979-b5f0-abf5a4d4b72a"). InnerVolumeSpecName "kube-api-access-rtgzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.700668 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-baf0-account-create-update-m7shf" event={"ID":"85c63c14-1712-4979-b5f0-abf5a4d4b72a","Type":"ContainerDied","Data":"3ac8dbbb636c9070a3495419db67d3ba4ccd6c4cec954240458ff5a2168380e7"} Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.701134 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ac8dbbb636c9070a3495419db67d3ba4ccd6c4cec954240458ff5a2168380e7" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.701293 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-baf0-account-create-update-m7shf" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.705170 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2lh75" event={"ID":"1fa46325-18c8-48b6-bfe2-d5492f6a5998","Type":"ContainerStarted","Data":"af293e0beaeecc17d4c902ea4388e1cfadca2da9982d9567189452e4167228e5"} Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.710060 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d1f5-account-create-update-8vm9b" event={"ID":"34d38ae5-5cb9-47f3-88f0-818962fed6c1","Type":"ContainerDied","Data":"7ba3d2af54887f5691871aa659cf6a9229aa460be71d8edf287805be5ad4b12c"} Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.710108 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba3d2af54887f5691871aa659cf6a9229aa460be71d8edf287805be5ad4b12c" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.710187 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d1f5-account-create-update-8vm9b" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.716661 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtgzc\" (UniqueName: \"kubernetes.io/projected/85c63c14-1712-4979-b5f0-abf5a4d4b72a-kube-api-access-rtgzc\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.716709 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlfrc\" (UniqueName: \"kubernetes.io/projected/f2d83bb5-2939-44e6-a0c5-8bf4893aebda-kube-api-access-xlfrc\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.716723 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaad67d5-b9bb-42ae-befd-3b8765e8b760-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.716738 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd7zq\" (UniqueName: \"kubernetes.io/projected/eaad67d5-b9bb-42ae-befd-3b8765e8b760-kube-api-access-qd7zq\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.718415 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7v9rk" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.718429 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7v9rk" event={"ID":"eaad67d5-b9bb-42ae-befd-3b8765e8b760","Type":"ContainerDied","Data":"03d8ed5375cc62f702dff4241d823eef8146f996cbdd57965673ada1db1aa2d7"} Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.718471 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03d8ed5375cc62f702dff4241d823eef8146f996cbdd57965673ada1db1aa2d7" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.720823 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c4jml" event={"ID":"188e6f67-6977-4709-bb3a-caf493bcc276","Type":"ContainerDied","Data":"a76d51afae84b8d5c3c8c6a56970c46a5cb8e92ca9ac07c1b600b2d55f8ec4e8"} Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.720954 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a76d51afae84b8d5c3c8c6a56970c46a5cb8e92ca9ac07c1b600b2d55f8ec4e8" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.721126 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c4jml" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.723953 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4923-account-create-update-rvxh4" event={"ID":"fea59417-b9ef-43f6-b2f1-76d15c6dd51b","Type":"ContainerDied","Data":"178356bf4df949dc84f727e5c4c4b46344a8674d27cdafd30c453b16722a32bb"} Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.723974 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4923-account-create-update-rvxh4" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.723998 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="178356bf4df949dc84f727e5c4c4b46344a8674d27cdafd30c453b16722a32bb" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.726118 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wcksm" event={"ID":"f2d83bb5-2939-44e6-a0c5-8bf4893aebda","Type":"ContainerDied","Data":"12daeed957d14ed0c616bf378cc7983847e9381ef75a7997cc72d0431b00b8fc"} Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.726152 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12daeed957d14ed0c616bf378cc7983847e9381ef75a7997cc72d0431b00b8fc" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.726214 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wcksm" Jan 29 16:29:43 crc kubenswrapper[4895]: I0129 16:29:43.768702 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-2lh75" podStartSLOduration=4.200370042 podStartE2EDuration="8.768662221s" podCreationTimestamp="2026-01-29 16:29:35 +0000 UTC" firstStartedPulling="2026-01-29 16:29:38.722753543 +0000 UTC m=+1062.525730807" lastFinishedPulling="2026-01-29 16:29:43.291045722 +0000 UTC m=+1067.094022986" observedRunningTime="2026-01-29 16:29:43.734028102 +0000 UTC m=+1067.537005376" watchObservedRunningTime="2026-01-29 16:29:43.768662221 +0000 UTC m=+1067.571639495" Jan 29 16:29:54 crc kubenswrapper[4895]: I0129 16:29:54.527230 4895 generic.go:334] "Generic (PLEG): container finished" podID="1fa46325-18c8-48b6-bfe2-d5492f6a5998" containerID="af293e0beaeecc17d4c902ea4388e1cfadca2da9982d9567189452e4167228e5" exitCode=0 Jan 29 16:29:54 crc kubenswrapper[4895]: I0129 16:29:54.527341 4895 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-db-sync-2lh75" event={"ID":"1fa46325-18c8-48b6-bfe2-d5492f6a5998","Type":"ContainerDied","Data":"af293e0beaeecc17d4c902ea4388e1cfadca2da9982d9567189452e4167228e5"} Jan 29 16:29:55 crc kubenswrapper[4895]: I0129 16:29:55.849858 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:55 crc kubenswrapper[4895]: I0129 16:29:55.993164 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa46325-18c8-48b6-bfe2-d5492f6a5998-combined-ca-bundle\") pod \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\" (UID: \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\") " Jan 29 16:29:55 crc kubenswrapper[4895]: I0129 16:29:55.993357 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa46325-18c8-48b6-bfe2-d5492f6a5998-config-data\") pod \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\" (UID: \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\") " Jan 29 16:29:55 crc kubenswrapper[4895]: I0129 16:29:55.993474 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gchrt\" (UniqueName: \"kubernetes.io/projected/1fa46325-18c8-48b6-bfe2-d5492f6a5998-kube-api-access-gchrt\") pod \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\" (UID: \"1fa46325-18c8-48b6-bfe2-d5492f6a5998\") " Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.003208 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa46325-18c8-48b6-bfe2-d5492f6a5998-kube-api-access-gchrt" (OuterVolumeSpecName: "kube-api-access-gchrt") pod "1fa46325-18c8-48b6-bfe2-d5492f6a5998" (UID: "1fa46325-18c8-48b6-bfe2-d5492f6a5998"). InnerVolumeSpecName "kube-api-access-gchrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.019141 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa46325-18c8-48b6-bfe2-d5492f6a5998-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fa46325-18c8-48b6-bfe2-d5492f6a5998" (UID: "1fa46325-18c8-48b6-bfe2-d5492f6a5998"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.055026 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa46325-18c8-48b6-bfe2-d5492f6a5998-config-data" (OuterVolumeSpecName: "config-data") pod "1fa46325-18c8-48b6-bfe2-d5492f6a5998" (UID: "1fa46325-18c8-48b6-bfe2-d5492f6a5998"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.096611 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gchrt\" (UniqueName: \"kubernetes.io/projected/1fa46325-18c8-48b6-bfe2-d5492f6a5998-kube-api-access-gchrt\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.096664 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa46325-18c8-48b6-bfe2-d5492f6a5998-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.096678 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa46325-18c8-48b6-bfe2-d5492f6a5998-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.548608 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2lh75" 
event={"ID":"1fa46325-18c8-48b6-bfe2-d5492f6a5998","Type":"ContainerDied","Data":"39fb25213012e8cdabae13c083051406ddb3dfc33e4bc5a493feb08b347c9852"} Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.548671 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39fb25213012e8cdabae13c083051406ddb3dfc33e4bc5a493feb08b347c9852" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.548795 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2lh75" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.759456 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2dh4b"] Jan 29 16:29:56 crc kubenswrapper[4895]: E0129 16:29:56.760305 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c63c14-1712-4979-b5f0-abf5a4d4b72a" containerName="mariadb-account-create-update" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760327 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c63c14-1712-4979-b5f0-abf5a4d4b72a" containerName="mariadb-account-create-update" Jan 29 16:29:56 crc kubenswrapper[4895]: E0129 16:29:56.760349 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f35430-2cba-4af3-bffb-fe817ccdb2e2" containerName="mariadb-account-create-update" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760358 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f35430-2cba-4af3-bffb-fe817ccdb2e2" containerName="mariadb-account-create-update" Jan 29 16:29:56 crc kubenswrapper[4895]: E0129 16:29:56.760374 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa46325-18c8-48b6-bfe2-d5492f6a5998" containerName="keystone-db-sync" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760382 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa46325-18c8-48b6-bfe2-d5492f6a5998" containerName="keystone-db-sync" Jan 29 16:29:56 crc kubenswrapper[4895]: 
E0129 16:29:56.760392 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188e6f67-6977-4709-bb3a-caf493bcc276" containerName="mariadb-database-create" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760400 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="188e6f67-6977-4709-bb3a-caf493bcc276" containerName="mariadb-database-create" Jan 29 16:29:56 crc kubenswrapper[4895]: E0129 16:29:56.760421 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaad67d5-b9bb-42ae-befd-3b8765e8b760" containerName="mariadb-database-create" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760428 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaad67d5-b9bb-42ae-befd-3b8765e8b760" containerName="mariadb-database-create" Jan 29 16:29:56 crc kubenswrapper[4895]: E0129 16:29:56.760443 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea59417-b9ef-43f6-b2f1-76d15c6dd51b" containerName="mariadb-account-create-update" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760451 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fea59417-b9ef-43f6-b2f1-76d15c6dd51b" containerName="mariadb-account-create-update" Jan 29 16:29:56 crc kubenswrapper[4895]: E0129 16:29:56.760469 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d38ae5-5cb9-47f3-88f0-818962fed6c1" containerName="mariadb-account-create-update" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760476 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d38ae5-5cb9-47f3-88f0-818962fed6c1" containerName="mariadb-account-create-update" Jan 29 16:29:56 crc kubenswrapper[4895]: E0129 16:29:56.760488 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d83bb5-2939-44e6-a0c5-8bf4893aebda" containerName="mariadb-database-create" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760498 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f2d83bb5-2939-44e6-a0c5-8bf4893aebda" containerName="mariadb-database-create" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760692 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f35430-2cba-4af3-bffb-fe817ccdb2e2" containerName="mariadb-account-create-update" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760709 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa46325-18c8-48b6-bfe2-d5492f6a5998" containerName="keystone-db-sync" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760718 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d83bb5-2939-44e6-a0c5-8bf4893aebda" containerName="mariadb-database-create" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760729 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c63c14-1712-4979-b5f0-abf5a4d4b72a" containerName="mariadb-account-create-update" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760739 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d38ae5-5cb9-47f3-88f0-818962fed6c1" containerName="mariadb-account-create-update" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760747 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="188e6f67-6977-4709-bb3a-caf493bcc276" containerName="mariadb-database-create" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760759 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaad67d5-b9bb-42ae-befd-3b8765e8b760" containerName="mariadb-database-create" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.760789 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fea59417-b9ef-43f6-b2f1-76d15c6dd51b" containerName="mariadb-account-create-update" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.761502 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.764392 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.764731 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-m7trg" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.765696 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.765751 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.768183 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.768353 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-98lwc"] Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.769775 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.781497 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2dh4b"] Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.800670 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-98lwc"] Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.916698 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.916752 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-config\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.916779 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl4c4\" (UniqueName: \"kubernetes.io/projected/8186c562-300e-4a13-9214-7c14c52cc708-kube-api-access-tl4c4\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.916812 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxtx6\" (UniqueName: \"kubernetes.io/projected/6978145e-f04a-4c48-b2a7-648ae699b8d8-kube-api-access-dxtx6\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " 
pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.916843 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-combined-ca-bundle\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.916877 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-config-data\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.916896 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.916910 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-scripts\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.916943 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-credential-keys\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " 
pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.916965 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-fernet-keys\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.917013 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.937947 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-bv8g6"] Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.947439 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.953203 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.953751 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.954144 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-c444h" Jan 29 16:29:56 crc kubenswrapper[4895]: I0129 16:29:56.958123 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bv8g6"] Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.019482 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-6fv5s"] Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.020631 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.022469 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-config\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.022497 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl4c4\" (UniqueName: \"kubernetes.io/projected/8186c562-300e-4a13-9214-7c14c52cc708-kube-api-access-tl4c4\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.022529 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxtx6\" (UniqueName: 
\"kubernetes.io/projected/6978145e-f04a-4c48-b2a7-648ae699b8d8-kube-api-access-dxtx6\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.022557 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-combined-ca-bundle\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.022573 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-config-data\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.022591 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.022608 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-scripts\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.023005 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-credential-keys\") pod 
\"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.023104 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-fernet-keys\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.023208 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.023290 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.024081 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.024083 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-config\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " 
pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.039264 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.039401 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-combined-ca-bundle\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.040169 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.052153 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.052826 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cb56t" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.053040 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.053069 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.053509 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 
16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.071601 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-credential-keys\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.072280 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-config-data\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.072366 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-scripts\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.072925 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.089471 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-fernet-keys\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.090516 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.114262 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl4c4\" (UniqueName: 
\"kubernetes.io/projected/8186c562-300e-4a13-9214-7c14c52cc708-kube-api-access-tl4c4\") pod \"keystone-bootstrap-2dh4b\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.146596 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxtx6\" (UniqueName: \"kubernetes.io/projected/6978145e-f04a-4c48-b2a7-648ae699b8d8-kube-api-access-dxtx6\") pod \"dnsmasq-dns-66fbd85b65-98lwc\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.178224 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.178276 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/257f0f91-6612-425d-9cff-50bc99ca7979-config\") pod \"neutron-db-sync-bv8g6\" (UID: \"257f0f91-6612-425d-9cff-50bc99ca7979\") " pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.178391 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01eead73-2722-45a1-a5f1-fa4522c0041b-etc-machine-id\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.178491 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-config-data\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.178525 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-db-sync-config-data\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.178557 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfhfh\" (UniqueName: \"kubernetes.io/projected/257f0f91-6612-425d-9cff-50bc99ca7979-kube-api-access-zfhfh\") pod \"neutron-db-sync-bv8g6\" (UID: \"257f0f91-6612-425d-9cff-50bc99ca7979\") " pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.178739 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz8mq\" (UniqueName: \"kubernetes.io/projected/01eead73-2722-45a1-a5f1-fa4522c0041b-kube-api-access-bz8mq\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.178792 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-scripts\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.178847 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257f0f91-6612-425d-9cff-50bc99ca7979-combined-ca-bundle\") pod \"neutron-db-sync-bv8g6\" (UID: \"257f0f91-6612-425d-9cff-50bc99ca7979\") " pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.180844 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-combined-ca-bundle\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.279644 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.280151 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.287134 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6fv5s"] Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.288826 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.288919 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz8mq\" (UniqueName: \"kubernetes.io/projected/01eead73-2722-45a1-a5f1-fa4522c0041b-kube-api-access-bz8mq\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.288953 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-scripts\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.288976 
4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.289001 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f28400b-b007-4ee5-a8fb-1aba7192e49f-run-httpd\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.289023 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-scripts\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.289048 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257f0f91-6612-425d-9cff-50bc99ca7979-combined-ca-bundle\") pod \"neutron-db-sync-bv8g6\" (UID: \"257f0f91-6612-425d-9cff-50bc99ca7979\") " pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.289077 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-combined-ca-bundle\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.289095 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0f28400b-b007-4ee5-a8fb-1aba7192e49f-log-httpd\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.289123 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/257f0f91-6612-425d-9cff-50bc99ca7979-config\") pod \"neutron-db-sync-bv8g6\" (UID: \"257f0f91-6612-425d-9cff-50bc99ca7979\") " pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.289149 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01eead73-2722-45a1-a5f1-fa4522c0041b-etc-machine-id\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.289183 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9zw7\" (UniqueName: \"kubernetes.io/projected/0f28400b-b007-4ee5-a8fb-1aba7192e49f-kube-api-access-r9zw7\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.289204 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-config-data\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.289228 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-db-sync-config-data\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") 
" pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.289246 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfhfh\" (UniqueName: \"kubernetes.io/projected/257f0f91-6612-425d-9cff-50bc99ca7979-kube-api-access-zfhfh\") pod \"neutron-db-sync-bv8g6\" (UID: \"257f0f91-6612-425d-9cff-50bc99ca7979\") " pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.289281 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-config-data\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.298166 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01eead73-2722-45a1-a5f1-fa4522c0041b-etc-machine-id\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.318800 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.328852 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-mmtp6"] Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.330256 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.345639 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.346120 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6ksll" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.346957 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.353457 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-config-data\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.361192 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz8mq\" (UniqueName: \"kubernetes.io/projected/01eead73-2722-45a1-a5f1-fa4522c0041b-kube-api-access-bz8mq\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.362060 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257f0f91-6612-425d-9cff-50bc99ca7979-combined-ca-bundle\") pod \"neutron-db-sync-bv8g6\" (UID: \"257f0f91-6612-425d-9cff-50bc99ca7979\") " pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.364555 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-scripts\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " 
pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.365664 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-98lwc"] Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.366533 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.369783 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/257f0f91-6612-425d-9cff-50bc99ca7979-config\") pod \"neutron-db-sync-bv8g6\" (UID: \"257f0f91-6612-425d-9cff-50bc99ca7979\") " pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.369907 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-combined-ca-bundle\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.381668 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfhfh\" (UniqueName: \"kubernetes.io/projected/257f0f91-6612-425d-9cff-50bc99ca7979-kube-api-access-zfhfh\") pod \"neutron-db-sync-bv8g6\" (UID: \"257f0f91-6612-425d-9cff-50bc99ca7979\") " pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.382036 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-db-sync-config-data\") pod \"cinder-db-sync-6fv5s\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.392006 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-config-data\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.392060 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.392083 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-scripts\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.392154 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.392174 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f28400b-b007-4ee5-a8fb-1aba7192e49f-run-httpd\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.392194 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-scripts\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 
29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.392220 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-config-data\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.392236 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm6nh\" (UniqueName: \"kubernetes.io/projected/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-kube-api-access-bm6nh\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.392269 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f28400b-b007-4ee5-a8fb-1aba7192e49f-log-httpd\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.392296 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-logs\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.392332 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9zw7\" (UniqueName: \"kubernetes.io/projected/0f28400b-b007-4ee5-a8fb-1aba7192e49f-kube-api-access-r9zw7\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.392347 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-combined-ca-bundle\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.393698 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f28400b-b007-4ee5-a8fb-1aba7192e49f-log-httpd\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.397847 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.400177 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-scripts\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.400509 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f28400b-b007-4ee5-a8fb-1aba7192e49f-run-httpd\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.400612 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mmtp6"] Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.404667 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:29:57 crc 
kubenswrapper[4895]: I0129 16:29:57.410203 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-m7trg" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.418159 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.429148 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-config-data\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.430076 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.432224 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-6frdn"] Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.436266 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9zw7\" (UniqueName: \"kubernetes.io/projected/0f28400b-b007-4ee5-a8fb-1aba7192e49f-kube-api-access-r9zw7\") pod \"ceilometer-0\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.467093 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-6frdn"] Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.467144 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2vnnx"] Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.467840 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.468280 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.480552 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5nvs4" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.480775 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.485190 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2vnnx"] Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.494117 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.494260 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca03651-86f1-4b94-bdfc-ff182c872873-combined-ca-bundle\") pod \"barbican-db-sync-2vnnx\" (UID: \"4ca03651-86f1-4b94-bdfc-ff182c872873\") " pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.494337 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-scripts\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.494420 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.494534 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hntsj\" (UniqueName: \"kubernetes.io/projected/4ca03651-86f1-4b94-bdfc-ff182c872873-kube-api-access-hntsj\") pod \"barbican-db-sync-2vnnx\" (UID: \"4ca03651-86f1-4b94-bdfc-ff182c872873\") " pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.494613 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-config-data\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.494687 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm6nh\" (UniqueName: \"kubernetes.io/projected/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-kube-api-access-bm6nh\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.494763 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.494833 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9825\" (UniqueName: \"kubernetes.io/projected/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-kube-api-access-d9825\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.494931 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-config\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.495013 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-logs\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.495090 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ca03651-86f1-4b94-bdfc-ff182c872873-db-sync-config-data\") pod \"barbican-db-sync-2vnnx\" (UID: \"4ca03651-86f1-4b94-bdfc-ff182c872873\") " pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.495170 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-combined-ca-bundle\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.504883 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-logs\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.509167 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-combined-ca-bundle\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.517273 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-config-data\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.520366 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-scripts\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.546608 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm6nh\" (UniqueName: \"kubernetes.io/projected/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-kube-api-access-bm6nh\") pod \"placement-db-sync-mmtp6\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.574200 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-c444h" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.582262 4895 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.583186 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.596369 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hntsj\" (UniqueName: \"kubernetes.io/projected/4ca03651-86f1-4b94-bdfc-ff182c872873-kube-api-access-hntsj\") pod \"barbican-db-sync-2vnnx\" (UID: \"4ca03651-86f1-4b94-bdfc-ff182c872873\") " pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.596409 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.596430 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9825\" (UniqueName: \"kubernetes.io/projected/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-kube-api-access-d9825\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.596456 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-config\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.596492 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/4ca03651-86f1-4b94-bdfc-ff182c872873-db-sync-config-data\") pod \"barbican-db-sync-2vnnx\" (UID: \"4ca03651-86f1-4b94-bdfc-ff182c872873\") " pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.597068 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.597104 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca03651-86f1-4b94-bdfc-ff182c872873-combined-ca-bundle\") pod \"barbican-db-sync-2vnnx\" (UID: \"4ca03651-86f1-4b94-bdfc-ff182c872873\") " pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.597140 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.597934 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.598719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.599406 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.606386 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca03651-86f1-4b94-bdfc-ff182c872873-combined-ca-bundle\") pod \"barbican-db-sync-2vnnx\" (UID: \"4ca03651-86f1-4b94-bdfc-ff182c872873\") " pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.607280 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-config\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.615383 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ca03651-86f1-4b94-bdfc-ff182c872873-db-sync-config-data\") pod \"barbican-db-sync-2vnnx\" (UID: \"4ca03651-86f1-4b94-bdfc-ff182c872873\") " pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.634394 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hntsj\" (UniqueName: \"kubernetes.io/projected/4ca03651-86f1-4b94-bdfc-ff182c872873-kube-api-access-hntsj\") pod \"barbican-db-sync-2vnnx\" (UID: \"4ca03651-86f1-4b94-bdfc-ff182c872873\") " 
pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.640728 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9825\" (UniqueName: \"kubernetes.io/projected/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-kube-api-access-d9825\") pod \"dnsmasq-dns-6bf59f66bf-6frdn\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.675269 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.734126 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mmtp6" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.807831 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:29:57 crc kubenswrapper[4895]: I0129 16:29:57.820523 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.105259 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-98lwc"] Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.239567 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2dh4b"] Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.294077 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.335231 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6fv5s"] Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.354856 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bv8g6"] Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.368175 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:29:58 crc kubenswrapper[4895]: W0129 16:29:58.394684 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f28400b_b007_4ee5_a8fb_1aba7192e49f.slice/crio-e00af9f4e4c2cd87e8aa0d06d2125381e73f20b6cb4b3aaf583a70691afcb972 WatchSource:0}: Error finding container e00af9f4e4c2cd87e8aa0d06d2125381e73f20b6cb4b3aaf583a70691afcb972: Status 404 returned error can't find the container with id e00af9f4e4c2cd87e8aa0d06d2125381e73f20b6cb4b3aaf583a70691afcb972 Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.535293 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mmtp6"] Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.544364 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2vnnx"] Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.607412 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="6978145e-f04a-4c48-b2a7-648ae699b8d8" containerID="f83cd097400143c5c99caf97de6eaa8f2ae826a52e3992c7f99ae6f857220868" exitCode=0 Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.607927 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" event={"ID":"6978145e-f04a-4c48-b2a7-648ae699b8d8","Type":"ContainerDied","Data":"f83cd097400143c5c99caf97de6eaa8f2ae826a52e3992c7f99ae6f857220868"} Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.607998 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" event={"ID":"6978145e-f04a-4c48-b2a7-648ae699b8d8","Type":"ContainerStarted","Data":"a622ad279f586c696d293dfab3eac78ae23e16ddbc13d1ac42285bbe3c6f1b6e"} Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.610513 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f28400b-b007-4ee5-a8fb-1aba7192e49f","Type":"ContainerStarted","Data":"e00af9f4e4c2cd87e8aa0d06d2125381e73f20b6cb4b3aaf583a70691afcb972"} Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.618717 4895 generic.go:334] "Generic (PLEG): container finished" podID="cd10e751-7bed-464f-a755-a183b5ed4412" containerID="007ad5d447907115a9942c2166eab6fbbbd96d2723d28430f48cfdde3e3059ef" exitCode=0 Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.618809 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7pr4f" event={"ID":"cd10e751-7bed-464f-a755-a183b5ed4412","Type":"ContainerDied","Data":"007ad5d447907115a9942c2166eab6fbbbd96d2723d28430f48cfdde3e3059ef"} Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.620985 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bv8g6" event={"ID":"257f0f91-6612-425d-9cff-50bc99ca7979","Type":"ContainerStarted","Data":"1c7414b7e594baacd14011088f847ef211c2cddda1d16a0091bd722a15756aee"} Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.623227 4895 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fv5s" event={"ID":"01eead73-2722-45a1-a5f1-fa4522c0041b","Type":"ContainerStarted","Data":"99229e9c46cfeb2bc2d5034d347e2e75de9c5e2023a77fdb0545ff4a4ecf4ae9"} Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.627099 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2dh4b" event={"ID":"8186c562-300e-4a13-9214-7c14c52cc708","Type":"ContainerStarted","Data":"a304339d5db067b4a262c9cd7f5820e9d88808d17383d05950dbb3f99476287e"} Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.670735 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2dh4b" podStartSLOduration=2.670714997 podStartE2EDuration="2.670714997s" podCreationTimestamp="2026-01-29 16:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:58.66453267 +0000 UTC m=+1082.467509944" watchObservedRunningTime="2026-01-29 16:29:58.670714997 +0000 UTC m=+1082.473692261" Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.670989 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-6frdn"] Jan 29 16:29:58 crc kubenswrapper[4895]: I0129 16:29:58.968643 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.134099 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-config\") pod \"6978145e-f04a-4c48-b2a7-648ae699b8d8\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.134429 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxtx6\" (UniqueName: \"kubernetes.io/projected/6978145e-f04a-4c48-b2a7-648ae699b8d8-kube-api-access-dxtx6\") pod \"6978145e-f04a-4c48-b2a7-648ae699b8d8\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.134484 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-ovsdbserver-sb\") pod \"6978145e-f04a-4c48-b2a7-648ae699b8d8\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.134614 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-ovsdbserver-nb\") pod \"6978145e-f04a-4c48-b2a7-648ae699b8d8\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.134677 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-dns-svc\") pod \"6978145e-f04a-4c48-b2a7-648ae699b8d8\" (UID: \"6978145e-f04a-4c48-b2a7-648ae699b8d8\") " Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.143047 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6978145e-f04a-4c48-b2a7-648ae699b8d8-kube-api-access-dxtx6" (OuterVolumeSpecName: "kube-api-access-dxtx6") pod "6978145e-f04a-4c48-b2a7-648ae699b8d8" (UID: "6978145e-f04a-4c48-b2a7-648ae699b8d8"). InnerVolumeSpecName "kube-api-access-dxtx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.180654 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-config" (OuterVolumeSpecName: "config") pod "6978145e-f04a-4c48-b2a7-648ae699b8d8" (UID: "6978145e-f04a-4c48-b2a7-648ae699b8d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.182820 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6978145e-f04a-4c48-b2a7-648ae699b8d8" (UID: "6978145e-f04a-4c48-b2a7-648ae699b8d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.193031 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6978145e-f04a-4c48-b2a7-648ae699b8d8" (UID: "6978145e-f04a-4c48-b2a7-648ae699b8d8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.238942 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxtx6\" (UniqueName: \"kubernetes.io/projected/6978145e-f04a-4c48-b2a7-648ae699b8d8-kube-api-access-dxtx6\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.239307 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.239320 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.239332 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.250460 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6978145e-f04a-4c48-b2a7-648ae699b8d8" (UID: "6978145e-f04a-4c48-b2a7-648ae699b8d8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.294193 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.342340 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6978145e-f04a-4c48-b2a7-648ae699b8d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.656297 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.656276 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-98lwc" event={"ID":"6978145e-f04a-4c48-b2a7-648ae699b8d8","Type":"ContainerDied","Data":"a622ad279f586c696d293dfab3eac78ae23e16ddbc13d1ac42285bbe3c6f1b6e"} Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.656420 4895 scope.go:117] "RemoveContainer" containerID="f83cd097400143c5c99caf97de6eaa8f2ae826a52e3992c7f99ae6f857220868" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.658371 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bv8g6" event={"ID":"257f0f91-6612-425d-9cff-50bc99ca7979","Type":"ContainerStarted","Data":"7bda84e034982ccb40df97d698ecaccef02e743160c5520305f9dbda15e89011"} Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.661968 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mmtp6" event={"ID":"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293","Type":"ContainerStarted","Data":"86021b68a4ccb04367079360f022a2bc2d820dc56f2fbe70017bf7c6ba036db6"} Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.663586 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2dh4b" 
event={"ID":"8186c562-300e-4a13-9214-7c14c52cc708","Type":"ContainerStarted","Data":"e315919af079baffbb4fee903cce32f2c976ec3eb34c50a727adf60baa77403e"} Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.664956 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2vnnx" event={"ID":"4ca03651-86f1-4b94-bdfc-ff182c872873","Type":"ContainerStarted","Data":"c1229e2a125006e54f82fd953113704b0365b9cce68116fd3244969ed0d223c6"} Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.666291 4895 generic.go:334] "Generic (PLEG): container finished" podID="8509a6da-2b35-44de-a7a2-d2b9df5dcca2" containerID="691f79ee17701032702ffbd1d9a2d5d4c1968ff5390f4ed3567ec786d0007450" exitCode=0 Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.666932 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" event={"ID":"8509a6da-2b35-44de-a7a2-d2b9df5dcca2","Type":"ContainerDied","Data":"691f79ee17701032702ffbd1d9a2d5d4c1968ff5390f4ed3567ec786d0007450"} Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.666953 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" event={"ID":"8509a6da-2b35-44de-a7a2-d2b9df5dcca2","Type":"ContainerStarted","Data":"47519293232a0335059b7bf9cc03e77f6eedb15e0e29f05b323490001d9b4335"} Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.679113 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-bv8g6" podStartSLOduration=3.6790951769999998 podStartE2EDuration="3.679095177s" podCreationTimestamp="2026-01-29 16:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:29:59.676500127 +0000 UTC m=+1083.479477401" watchObservedRunningTime="2026-01-29 16:29:59.679095177 +0000 UTC m=+1083.482072441" Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.805436 4895 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-98lwc"] Jan 29 16:29:59 crc kubenswrapper[4895]: I0129 16:29:59.825296 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-98lwc"] Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.168180 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s"] Jan 29 16:30:00 crc kubenswrapper[4895]: E0129 16:30:00.169142 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6978145e-f04a-4c48-b2a7-648ae699b8d8" containerName="init" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.169180 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6978145e-f04a-4c48-b2a7-648ae699b8d8" containerName="init" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.169478 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6978145e-f04a-4c48-b2a7-648ae699b8d8" containerName="init" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.170368 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.173563 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.173779 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.207016 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s"] Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.292338 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thlgn\" (UniqueName: \"kubernetes.io/projected/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-kube-api-access-thlgn\") pod \"collect-profiles-29495070-xsl9s\" (UID: \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.292435 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-config-volume\") pod \"collect-profiles-29495070-xsl9s\" (UID: \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.292478 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-secret-volume\") pod \"collect-profiles-29495070-xsl9s\" (UID: \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.372453 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7pr4f" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.394378 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thlgn\" (UniqueName: \"kubernetes.io/projected/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-kube-api-access-thlgn\") pod \"collect-profiles-29495070-xsl9s\" (UID: \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.394472 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-config-volume\") pod \"collect-profiles-29495070-xsl9s\" (UID: \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.394503 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-secret-volume\") pod \"collect-profiles-29495070-xsl9s\" (UID: \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.396029 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-config-volume\") pod \"collect-profiles-29495070-xsl9s\" (UID: \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 
16:30:00.409572 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-secret-volume\") pod \"collect-profiles-29495070-xsl9s\" (UID: \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.417368 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thlgn\" (UniqueName: \"kubernetes.io/projected/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-kube-api-access-thlgn\") pod \"collect-profiles-29495070-xsl9s\" (UID: \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.496125 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-combined-ca-bundle\") pod \"cd10e751-7bed-464f-a755-a183b5ed4412\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.496254 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x7xb\" (UniqueName: \"kubernetes.io/projected/cd10e751-7bed-464f-a755-a183b5ed4412-kube-api-access-5x7xb\") pod \"cd10e751-7bed-464f-a755-a183b5ed4412\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.496323 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-config-data\") pod \"cd10e751-7bed-464f-a755-a183b5ed4412\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.496350 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-db-sync-config-data\") pod \"cd10e751-7bed-464f-a755-a183b5ed4412\" (UID: \"cd10e751-7bed-464f-a755-a183b5ed4412\") " Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.512227 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd10e751-7bed-464f-a755-a183b5ed4412-kube-api-access-5x7xb" (OuterVolumeSpecName: "kube-api-access-5x7xb") pod "cd10e751-7bed-464f-a755-a183b5ed4412" (UID: "cd10e751-7bed-464f-a755-a183b5ed4412"). InnerVolumeSpecName "kube-api-access-5x7xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.512276 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cd10e751-7bed-464f-a755-a183b5ed4412" (UID: "cd10e751-7bed-464f-a755-a183b5ed4412"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.512588 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.530270 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd10e751-7bed-464f-a755-a183b5ed4412" (UID: "cd10e751-7bed-464f-a755-a183b5ed4412"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.549265 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-config-data" (OuterVolumeSpecName: "config-data") pod "cd10e751-7bed-464f-a755-a183b5ed4412" (UID: "cd10e751-7bed-464f-a755-a183b5ed4412"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.602421 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.602479 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x7xb\" (UniqueName: \"kubernetes.io/projected/cd10e751-7bed-464f-a755-a183b5ed4412-kube-api-access-5x7xb\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.602495 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.602507 4895 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cd10e751-7bed-464f-a755-a183b5ed4412-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.728725 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" event={"ID":"8509a6da-2b35-44de-a7a2-d2b9df5dcca2","Type":"ContainerStarted","Data":"5a4d44e46e0d9a77d96b71d402ca894dbd6973d61d3d50ae2bbda6a6b2e27ae7"} Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.730051 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.741880 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7pr4f" event={"ID":"cd10e751-7bed-464f-a755-a183b5ed4412","Type":"ContainerDied","Data":"ead18db35821502f68da962bd713e092aacf97d7c649e6555b0f84739298c800"} Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.741942 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ead18db35821502f68da962bd713e092aacf97d7c649e6555b0f84739298c800" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.742400 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7pr4f" Jan 29 16:30:00 crc kubenswrapper[4895]: I0129 16:30:00.782779 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" podStartSLOduration=3.7827522 podStartE2EDuration="3.7827522s" podCreationTimestamp="2026-01-29 16:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:00.764562157 +0000 UTC m=+1084.567539421" watchObservedRunningTime="2026-01-29 16:30:00.7827522 +0000 UTC m=+1084.585729464" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.072671 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6978145e-f04a-4c48-b2a7-648ae699b8d8" path="/var/lib/kubelet/pods/6978145e-f04a-4c48-b2a7-648ae699b8d8/volumes" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.101470 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s"] Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.120276 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-6frdn"] Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 
16:30:01.169515 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-dml95"] Jan 29 16:30:01 crc kubenswrapper[4895]: E0129 16:30:01.169943 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd10e751-7bed-464f-a755-a183b5ed4412" containerName="glance-db-sync" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.169956 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd10e751-7bed-464f-a755-a183b5ed4412" containerName="glance-db-sync" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.170151 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd10e751-7bed-464f-a755-a183b5ed4412" containerName="glance-db-sync" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.171052 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.191845 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-dml95"] Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.329211 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.329319 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.329366 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz24j\" (UniqueName: \"kubernetes.io/projected/d48aba20-5109-41c6-93ed-33b3e7536815-kube-api-access-rz24j\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.329461 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.329511 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-config\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.431353 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.431429 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-config\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.431508 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.431542 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.431575 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz24j\" (UniqueName: \"kubernetes.io/projected/d48aba20-5109-41c6-93ed-33b3e7536815-kube-api-access-rz24j\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.433223 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.433857 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.434508 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-config\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.434515 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.480334 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz24j\" (UniqueName: \"kubernetes.io/projected/d48aba20-5109-41c6-93ed-33b3e7536815-kube-api-access-rz24j\") pod \"dnsmasq-dns-5b6dbdb6f5-dml95\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.535029 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.768822 4895 generic.go:334] "Generic (PLEG): container finished" podID="afe15df8-b7a5-40c0-b0d0-5cd2ec699991" containerID="05bcc2984662fca6527da0038d4ae401048308d59901dbed2914cc292d6c31d4" exitCode=0 Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.769318 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" event={"ID":"afe15df8-b7a5-40c0-b0d0-5cd2ec699991","Type":"ContainerDied","Data":"05bcc2984662fca6527da0038d4ae401048308d59901dbed2914cc292d6c31d4"} Jan 29 16:30:01 crc kubenswrapper[4895]: I0129 16:30:01.769384 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" event={"ID":"afe15df8-b7a5-40c0-b0d0-5cd2ec699991","Type":"ContainerStarted","Data":"83f109a49a482a85c4ca9dd641f503cff512351b7f35cfc9adb71e0cda0a3090"} Jan 29 16:30:02 crc kubenswrapper[4895]: I0129 16:30:02.135706 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-dml95"] Jan 29 16:30:02 crc kubenswrapper[4895]: I0129 16:30:02.781313 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" podUID="8509a6da-2b35-44de-a7a2-d2b9df5dcca2" containerName="dnsmasq-dns" containerID="cri-o://5a4d44e46e0d9a77d96b71d402ca894dbd6973d61d3d50ae2bbda6a6b2e27ae7" gracePeriod=10 Jan 29 16:30:03 crc kubenswrapper[4895]: I0129 16:30:03.806363 4895 generic.go:334] "Generic (PLEG): container finished" podID="8509a6da-2b35-44de-a7a2-d2b9df5dcca2" containerID="5a4d44e46e0d9a77d96b71d402ca894dbd6973d61d3d50ae2bbda6a6b2e27ae7" exitCode=0 Jan 29 16:30:03 crc kubenswrapper[4895]: I0129 16:30:03.806536 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" 
event={"ID":"8509a6da-2b35-44de-a7a2-d2b9df5dcca2","Type":"ContainerDied","Data":"5a4d44e46e0d9a77d96b71d402ca894dbd6973d61d3d50ae2bbda6a6b2e27ae7"} Jan 29 16:30:03 crc kubenswrapper[4895]: I0129 16:30:03.812201 4895 generic.go:334] "Generic (PLEG): container finished" podID="8186c562-300e-4a13-9214-7c14c52cc708" containerID="e315919af079baffbb4fee903cce32f2c976ec3eb34c50a727adf60baa77403e" exitCode=0 Jan 29 16:30:03 crc kubenswrapper[4895]: I0129 16:30:03.812257 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2dh4b" event={"ID":"8186c562-300e-4a13-9214-7c14c52cc708","Type":"ContainerDied","Data":"e315919af079baffbb4fee903cce32f2c976ec3eb34c50a727adf60baa77403e"} Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.840158 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.844492 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.873123 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.896666 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" event={"ID":"afe15df8-b7a5-40c0-b0d0-5cd2ec699991","Type":"ContainerDied","Data":"83f109a49a482a85c4ca9dd641f503cff512351b7f35cfc9adb71e0cda0a3090"} Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.896746 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83f109a49a482a85c4ca9dd641f503cff512351b7f35cfc9adb71e0cda0a3090" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.896904 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.902433 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2dh4b" event={"ID":"8186c562-300e-4a13-9214-7c14c52cc708","Type":"ContainerDied","Data":"a304339d5db067b4a262c9cd7f5820e9d88808d17383d05950dbb3f99476287e"} Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.902497 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a304339d5db067b4a262c9cd7f5820e9d88808d17383d05950dbb3f99476287e" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.902617 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2dh4b" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.905271 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" event={"ID":"8509a6da-2b35-44de-a7a2-d2b9df5dcca2","Type":"ContainerDied","Data":"47519293232a0335059b7bf9cc03e77f6eedb15e0e29f05b323490001d9b4335"} Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.905327 4895 scope.go:117] "RemoveContainer" containerID="5a4d44e46e0d9a77d96b71d402ca894dbd6973d61d3d50ae2bbda6a6b2e27ae7" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.905530 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.920686 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" event={"ID":"d48aba20-5109-41c6-93ed-33b3e7536815","Type":"ContainerStarted","Data":"e6afbfa0bcdad981971880e374288d8de47207d17eb07c77b9ddc4c79932906a"} Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964254 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-config-data\") pod \"8186c562-300e-4a13-9214-7c14c52cc708\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964304 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-dns-svc\") pod \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964348 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-credential-keys\") pod \"8186c562-300e-4a13-9214-7c14c52cc708\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964422 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-config\") pod \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964482 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-combined-ca-bundle\") pod \"8186c562-300e-4a13-9214-7c14c52cc708\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964525 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl4c4\" (UniqueName: \"kubernetes.io/projected/8186c562-300e-4a13-9214-7c14c52cc708-kube-api-access-tl4c4\") pod \"8186c562-300e-4a13-9214-7c14c52cc708\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964569 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-scripts\") pod \"8186c562-300e-4a13-9214-7c14c52cc708\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964609 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-ovsdbserver-sb\") pod \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964655 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-secret-volume\") pod \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\" (UID: \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964675 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-config-volume\") pod \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\" (UID: \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964717 
4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-ovsdbserver-nb\") pod \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964768 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thlgn\" (UniqueName: \"kubernetes.io/projected/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-kube-api-access-thlgn\") pod \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\" (UID: \"afe15df8-b7a5-40c0-b0d0-5cd2ec699991\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964805 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-fernet-keys\") pod \"8186c562-300e-4a13-9214-7c14c52cc708\" (UID: \"8186c562-300e-4a13-9214-7c14c52cc708\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.964838 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9825\" (UniqueName: \"kubernetes.io/projected/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-kube-api-access-d9825\") pod \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\" (UID: \"8509a6da-2b35-44de-a7a2-d2b9df5dcca2\") " Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.972552 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-kube-api-access-d9825" (OuterVolumeSpecName: "kube-api-access-d9825") pod "8509a6da-2b35-44de-a7a2-d2b9df5dcca2" (UID: "8509a6da-2b35-44de-a7a2-d2b9df5dcca2"). InnerVolumeSpecName "kube-api-access-d9825". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.972727 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8186c562-300e-4a13-9214-7c14c52cc708" (UID: "8186c562-300e-4a13-9214-7c14c52cc708"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.974204 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8186c562-300e-4a13-9214-7c14c52cc708" (UID: "8186c562-300e-4a13-9214-7c14c52cc708"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.976311 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-scripts" (OuterVolumeSpecName: "scripts") pod "8186c562-300e-4a13-9214-7c14c52cc708" (UID: "8186c562-300e-4a13-9214-7c14c52cc708"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.981382 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-config-volume" (OuterVolumeSpecName: "config-volume") pod "afe15df8-b7a5-40c0-b0d0-5cd2ec699991" (UID: "afe15df8-b7a5-40c0-b0d0-5cd2ec699991"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.982570 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "afe15df8-b7a5-40c0-b0d0-5cd2ec699991" (UID: "afe15df8-b7a5-40c0-b0d0-5cd2ec699991"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.985714 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8186c562-300e-4a13-9214-7c14c52cc708-kube-api-access-tl4c4" (OuterVolumeSpecName: "kube-api-access-tl4c4") pod "8186c562-300e-4a13-9214-7c14c52cc708" (UID: "8186c562-300e-4a13-9214-7c14c52cc708"). InnerVolumeSpecName "kube-api-access-tl4c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:10 crc kubenswrapper[4895]: I0129 16:30:10.986249 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-kube-api-access-thlgn" (OuterVolumeSpecName: "kube-api-access-thlgn") pod "afe15df8-b7a5-40c0-b0d0-5cd2ec699991" (UID: "afe15df8-b7a5-40c0-b0d0-5cd2ec699991"). InnerVolumeSpecName "kube-api-access-thlgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.023974 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-config" (OuterVolumeSpecName: "config") pod "8509a6da-2b35-44de-a7a2-d2b9df5dcca2" (UID: "8509a6da-2b35-44de-a7a2-d2b9df5dcca2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.028181 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8509a6da-2b35-44de-a7a2-d2b9df5dcca2" (UID: "8509a6da-2b35-44de-a7a2-d2b9df5dcca2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.028894 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8509a6da-2b35-44de-a7a2-d2b9df5dcca2" (UID: "8509a6da-2b35-44de-a7a2-d2b9df5dcca2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.036716 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8186c562-300e-4a13-9214-7c14c52cc708" (UID: "8186c562-300e-4a13-9214-7c14c52cc708"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.041804 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-config-data" (OuterVolumeSpecName: "config-data") pod "8186c562-300e-4a13-9214-7c14c52cc708" (UID: "8186c562-300e-4a13-9214-7c14c52cc708"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.042383 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8509a6da-2b35-44de-a7a2-d2b9df5dcca2" (UID: "8509a6da-2b35-44de-a7a2-d2b9df5dcca2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.067682 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.067727 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.067742 4895 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.067756 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.067768 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.067780 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl4c4\" (UniqueName: 
\"kubernetes.io/projected/8186c562-300e-4a13-9214-7c14c52cc708-kube-api-access-tl4c4\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.067792 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.067803 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.067818 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.067845 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.067856 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.067866 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thlgn\" (UniqueName: \"kubernetes.io/projected/afe15df8-b7a5-40c0-b0d0-5cd2ec699991-kube-api-access-thlgn\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.071032 4895 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8186c562-300e-4a13-9214-7c14c52cc708-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc 
kubenswrapper[4895]: I0129 16:30:11.071111 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9825\" (UniqueName: \"kubernetes.io/projected/8509a6da-2b35-44de-a7a2-d2b9df5dcca2-kube-api-access-d9825\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.339915 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-6frdn"] Jan 29 16:30:11 crc kubenswrapper[4895]: I0129 16:30:11.349179 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-6frdn"] Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.055931 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2dh4b"] Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.068134 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2dh4b"] Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.167080 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7s9sc"] Jan 29 16:30:12 crc kubenswrapper[4895]: E0129 16:30:12.167520 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509a6da-2b35-44de-a7a2-d2b9df5dcca2" containerName="dnsmasq-dns" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.167536 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509a6da-2b35-44de-a7a2-d2b9df5dcca2" containerName="dnsmasq-dns" Jan 29 16:30:12 crc kubenswrapper[4895]: E0129 16:30:12.167560 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe15df8-b7a5-40c0-b0d0-5cd2ec699991" containerName="collect-profiles" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.167567 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe15df8-b7a5-40c0-b0d0-5cd2ec699991" containerName="collect-profiles" Jan 29 16:30:12 crc kubenswrapper[4895]: E0129 16:30:12.167588 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8509a6da-2b35-44de-a7a2-d2b9df5dcca2" containerName="init" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.167596 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509a6da-2b35-44de-a7a2-d2b9df5dcca2" containerName="init" Jan 29 16:30:12 crc kubenswrapper[4895]: E0129 16:30:12.167614 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8186c562-300e-4a13-9214-7c14c52cc708" containerName="keystone-bootstrap" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.167621 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8186c562-300e-4a13-9214-7c14c52cc708" containerName="keystone-bootstrap" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.167777 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8186c562-300e-4a13-9214-7c14c52cc708" containerName="keystone-bootstrap" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.167786 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8509a6da-2b35-44de-a7a2-d2b9df5dcca2" containerName="dnsmasq-dns" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.167798 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe15df8-b7a5-40c0-b0d0-5cd2ec699991" containerName="collect-profiles" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.168442 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.174808 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.175147 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-m7trg" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.175301 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.175682 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.176997 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7s9sc"] Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.177709 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.302192 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-combined-ca-bundle\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.302246 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-fernet-keys\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.302295 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-scripts\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.302423 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-config-data\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.302510 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfbw\" (UniqueName: \"kubernetes.io/projected/59598724-bed0-4b4c-9957-6c282df5b4a5-kube-api-access-jdfbw\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.302535 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-credential-keys\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.404701 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdfbw\" (UniqueName: \"kubernetes.io/projected/59598724-bed0-4b4c-9957-6c282df5b4a5-kube-api-access-jdfbw\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.404760 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-credential-keys\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.404794 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-combined-ca-bundle\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.404818 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-fernet-keys\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.404881 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-scripts\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.404968 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-config-data\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.412671 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-scripts\") pod \"keystone-bootstrap-7s9sc\" (UID: 
\"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.413054 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-fernet-keys\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.413416 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-config-data\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.414142 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-combined-ca-bundle\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.415364 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-credential-keys\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.423788 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdfbw\" (UniqueName: \"kubernetes.io/projected/59598724-bed0-4b4c-9957-6c282df5b4a5-kube-api-access-jdfbw\") pod \"keystone-bootstrap-7s9sc\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 
16:30:12.486701 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:12 crc kubenswrapper[4895]: I0129 16:30:12.822469 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bf59f66bf-6frdn" podUID="8509a6da-2b35-44de-a7a2-d2b9df5dcca2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Jan 29 16:30:13 crc kubenswrapper[4895]: I0129 16:30:13.049313 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8186c562-300e-4a13-9214-7c14c52cc708" path="/var/lib/kubelet/pods/8186c562-300e-4a13-9214-7c14c52cc708/volumes" Jan 29 16:30:13 crc kubenswrapper[4895]: I0129 16:30:13.050200 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8509a6da-2b35-44de-a7a2-d2b9df5dcca2" path="/var/lib/kubelet/pods/8509a6da-2b35-44de-a7a2-d2b9df5dcca2/volumes" Jan 29 16:30:21 crc kubenswrapper[4895]: I0129 16:30:21.010762 4895 scope.go:117] "RemoveContainer" containerID="691f79ee17701032702ffbd1d9a2d5d4c1968ff5390f4ed3567ec786d0007450" Jan 29 16:30:22 crc kubenswrapper[4895]: E0129 16:30:22.278536 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 29 16:30:22 crc kubenswrapper[4895]: E0129 16:30:22.278889 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-6fv5s_openstack(01eead73-2722-45a1-a5f1-fa4522c0041b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:30:22 crc kubenswrapper[4895]: E0129 16:30:22.280046 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-6fv5s" podUID="01eead73-2722-45a1-a5f1-fa4522c0041b" Jan 29 16:30:22 crc kubenswrapper[4895]: W0129 16:30:22.749457 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59598724_bed0_4b4c_9957_6c282df5b4a5.slice/crio-90a2f70fb6d13b54f954ac1b338552a805adbd8b5234aa118f58b6ffdd46e0ac WatchSource:0}: Error finding container 90a2f70fb6d13b54f954ac1b338552a805adbd8b5234aa118f58b6ffdd46e0ac: Status 404 returned error can't find the container with id 90a2f70fb6d13b54f954ac1b338552a805adbd8b5234aa118f58b6ffdd46e0ac Jan 29 16:30:22 crc kubenswrapper[4895]: I0129 16:30:22.755508 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7s9sc"] Jan 29 16:30:23 crc kubenswrapper[4895]: I0129 16:30:23.088567 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7s9sc" event={"ID":"59598724-bed0-4b4c-9957-6c282df5b4a5","Type":"ContainerStarted","Data":"43771ead4d0270ea5be9d2e1d7a0fa9fd7b0714ecda6af8c8c5321a8a71efc8d"} Jan 29 16:30:23 crc kubenswrapper[4895]: I0129 16:30:23.089042 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7s9sc" 
event={"ID":"59598724-bed0-4b4c-9957-6c282df5b4a5","Type":"ContainerStarted","Data":"90a2f70fb6d13b54f954ac1b338552a805adbd8b5234aa118f58b6ffdd46e0ac"} Jan 29 16:30:23 crc kubenswrapper[4895]: I0129 16:30:23.107475 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2vnnx" event={"ID":"4ca03651-86f1-4b94-bdfc-ff182c872873","Type":"ContainerStarted","Data":"817136ba4b1e589887ea981ba1282ee97d636e6e1ba7eaa8b8e35ff17d55e463"} Jan 29 16:30:23 crc kubenswrapper[4895]: I0129 16:30:23.136695 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f28400b-b007-4ee5-a8fb-1aba7192e49f","Type":"ContainerStarted","Data":"b75c88295c405fd70ac7a4d8e86b83a727b5b8002469318151dfd82bf2467b99"} Jan 29 16:30:23 crc kubenswrapper[4895]: I0129 16:30:23.145992 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7s9sc" podStartSLOduration=11.145970043 podStartE2EDuration="11.145970043s" podCreationTimestamp="2026-01-29 16:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:23.140539866 +0000 UTC m=+1106.943517130" watchObservedRunningTime="2026-01-29 16:30:23.145970043 +0000 UTC m=+1106.948947307" Jan 29 16:30:23 crc kubenswrapper[4895]: I0129 16:30:23.155063 4895 generic.go:334] "Generic (PLEG): container finished" podID="d48aba20-5109-41c6-93ed-33b3e7536815" containerID="dcecb26c5f504bff62d75772f6e30884891cac84d249aa6f61ecaddd3cf3970f" exitCode=0 Jan 29 16:30:23 crc kubenswrapper[4895]: I0129 16:30:23.155253 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" event={"ID":"d48aba20-5109-41c6-93ed-33b3e7536815","Type":"ContainerDied","Data":"dcecb26c5f504bff62d75772f6e30884891cac84d249aa6f61ecaddd3cf3970f"} Jan 29 16:30:23 crc kubenswrapper[4895]: I0129 16:30:23.183139 4895 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-db-sync-2vnnx" podStartSLOduration=2.54462372 podStartE2EDuration="26.18311013s" podCreationTimestamp="2026-01-29 16:29:57 +0000 UTC" firstStartedPulling="2026-01-29 16:29:58.619057616 +0000 UTC m=+1082.422034880" lastFinishedPulling="2026-01-29 16:30:22.257544026 +0000 UTC m=+1106.060521290" observedRunningTime="2026-01-29 16:30:23.181115606 +0000 UTC m=+1106.984092870" watchObservedRunningTime="2026-01-29 16:30:23.18311013 +0000 UTC m=+1106.986087404" Jan 29 16:30:23 crc kubenswrapper[4895]: I0129 16:30:23.190945 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mmtp6" event={"ID":"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293","Type":"ContainerStarted","Data":"f30403d180e1aefa552e8e24e6e15ea26d4837b48e94805a66494cd91129f917"} Jan 29 16:30:23 crc kubenswrapper[4895]: E0129 16:30:23.199179 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-6fv5s" podUID="01eead73-2722-45a1-a5f1-fa4522c0041b" Jan 29 16:30:23 crc kubenswrapper[4895]: I0129 16:30:23.320167 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-mmtp6" podStartSLOduration=2.702059073 podStartE2EDuration="26.320140488s" podCreationTimestamp="2026-01-29 16:29:57 +0000 UTC" firstStartedPulling="2026-01-29 16:29:58.616629801 +0000 UTC m=+1082.419607065" lastFinishedPulling="2026-01-29 16:30:22.234711216 +0000 UTC m=+1106.037688480" observedRunningTime="2026-01-29 16:30:23.309295183 +0000 UTC m=+1107.112272457" watchObservedRunningTime="2026-01-29 16:30:23.320140488 +0000 UTC m=+1107.123117752" Jan 29 16:30:24 crc kubenswrapper[4895]: I0129 16:30:24.213492 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" 
event={"ID":"d48aba20-5109-41c6-93ed-33b3e7536815","Type":"ContainerStarted","Data":"3c6385b0b1bbda766365e1e435d883bc847654f869db3e4383e5c9fe53f98eaa"} Jan 29 16:30:24 crc kubenswrapper[4895]: I0129 16:30:24.253892 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" podStartSLOduration=23.253860493 podStartE2EDuration="23.253860493s" podCreationTimestamp="2026-01-29 16:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:24.249946937 +0000 UTC m=+1108.052924221" watchObservedRunningTime="2026-01-29 16:30:24.253860493 +0000 UTC m=+1108.056837757" Jan 29 16:30:25 crc kubenswrapper[4895]: I0129 16:30:25.227814 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f28400b-b007-4ee5-a8fb-1aba7192e49f","Type":"ContainerStarted","Data":"76d21573eebe5cf08b57bcc3beaa9fc60df4fd775279f37dd755c76bc85ebc5c"} Jan 29 16:30:25 crc kubenswrapper[4895]: I0129 16:30:25.228332 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:27 crc kubenswrapper[4895]: I0129 16:30:27.249650 4895 generic.go:334] "Generic (PLEG): container finished" podID="59598724-bed0-4b4c-9957-6c282df5b4a5" containerID="43771ead4d0270ea5be9d2e1d7a0fa9fd7b0714ecda6af8c8c5321a8a71efc8d" exitCode=0 Jan 29 16:30:27 crc kubenswrapper[4895]: I0129 16:30:27.249810 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7s9sc" event={"ID":"59598724-bed0-4b4c-9957-6c282df5b4a5","Type":"ContainerDied","Data":"43771ead4d0270ea5be9d2e1d7a0fa9fd7b0714ecda6af8c8c5321a8a71efc8d"} Jan 29 16:30:28 crc kubenswrapper[4895]: I0129 16:30:28.262738 4895 generic.go:334] "Generic (PLEG): container finished" podID="0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293" 
containerID="f30403d180e1aefa552e8e24e6e15ea26d4837b48e94805a66494cd91129f917" exitCode=0 Jan 29 16:30:28 crc kubenswrapper[4895]: I0129 16:30:28.262840 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mmtp6" event={"ID":"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293","Type":"ContainerDied","Data":"f30403d180e1aefa552e8e24e6e15ea26d4837b48e94805a66494cd91129f917"} Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.276201 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7s9sc" event={"ID":"59598724-bed0-4b4c-9957-6c282df5b4a5","Type":"ContainerDied","Data":"90a2f70fb6d13b54f954ac1b338552a805adbd8b5234aa118f58b6ffdd46e0ac"} Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.276719 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90a2f70fb6d13b54f954ac1b338552a805adbd8b5234aa118f58b6ffdd46e0ac" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.278552 4895 generic.go:334] "Generic (PLEG): container finished" podID="4ca03651-86f1-4b94-bdfc-ff182c872873" containerID="817136ba4b1e589887ea981ba1282ee97d636e6e1ba7eaa8b8e35ff17d55e463" exitCode=0 Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.278643 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2vnnx" event={"ID":"4ca03651-86f1-4b94-bdfc-ff182c872873","Type":"ContainerDied","Data":"817136ba4b1e589887ea981ba1282ee97d636e6e1ba7eaa8b8e35ff17d55e463"} Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.368428 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.464566 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-combined-ca-bundle\") pod \"59598724-bed0-4b4c-9957-6c282df5b4a5\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.464620 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-scripts\") pod \"59598724-bed0-4b4c-9957-6c282df5b4a5\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.464649 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-fernet-keys\") pod \"59598724-bed0-4b4c-9957-6c282df5b4a5\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.464731 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-credential-keys\") pod \"59598724-bed0-4b4c-9957-6c282df5b4a5\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.468032 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-config-data\") pod \"59598724-bed0-4b4c-9957-6c282df5b4a5\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.468092 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdfbw\" (UniqueName: 
\"kubernetes.io/projected/59598724-bed0-4b4c-9957-6c282df5b4a5-kube-api-access-jdfbw\") pod \"59598724-bed0-4b4c-9957-6c282df5b4a5\" (UID: \"59598724-bed0-4b4c-9957-6c282df5b4a5\") " Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.472003 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "59598724-bed0-4b4c-9957-6c282df5b4a5" (UID: "59598724-bed0-4b4c-9957-6c282df5b4a5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.526762 4895 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.527503 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "59598724-bed0-4b4c-9957-6c282df5b4a5" (UID: "59598724-bed0-4b4c-9957-6c282df5b4a5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.528385 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-scripts" (OuterVolumeSpecName: "scripts") pod "59598724-bed0-4b4c-9957-6c282df5b4a5" (UID: "59598724-bed0-4b4c-9957-6c282df5b4a5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.528901 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59598724-bed0-4b4c-9957-6c282df5b4a5-kube-api-access-jdfbw" (OuterVolumeSpecName: "kube-api-access-jdfbw") pod "59598724-bed0-4b4c-9957-6c282df5b4a5" (UID: "59598724-bed0-4b4c-9957-6c282df5b4a5"). InnerVolumeSpecName "kube-api-access-jdfbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.532314 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-config-data" (OuterVolumeSpecName: "config-data") pod "59598724-bed0-4b4c-9957-6c282df5b4a5" (UID: "59598724-bed0-4b4c-9957-6c282df5b4a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.534735 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59598724-bed0-4b4c-9957-6c282df5b4a5" (UID: "59598724-bed0-4b4c-9957-6c282df5b4a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.628551 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.628602 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.628619 4895 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.628630 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59598724-bed0-4b4c-9957-6c282df5b4a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.628642 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdfbw\" (UniqueName: \"kubernetes.io/projected/59598724-bed0-4b4c-9957-6c282df5b4a5-kube-api-access-jdfbw\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.633089 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-mmtp6" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.730163 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-combined-ca-bundle\") pod \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.730731 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-logs\") pod \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.730897 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-config-data\") pod \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.730931 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-scripts\") pod \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.730953 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm6nh\" (UniqueName: \"kubernetes.io/projected/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-kube-api-access-bm6nh\") pod \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\" (UID: \"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293\") " Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.732162 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-logs" (OuterVolumeSpecName: "logs") pod "0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293" (UID: "0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.734717 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-scripts" (OuterVolumeSpecName: "scripts") pod "0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293" (UID: "0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.737558 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-kube-api-access-bm6nh" (OuterVolumeSpecName: "kube-api-access-bm6nh") pod "0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293" (UID: "0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293"). InnerVolumeSpecName "kube-api-access-bm6nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.758785 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293" (UID: "0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.761210 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-config-data" (OuterVolumeSpecName: "config-data") pod "0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293" (UID: "0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:29 crc kubenswrapper[4895]: E0129 16:30:29.788477 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:30:29 crc kubenswrapper[4895]: E0129 16:30:29.788750 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9zw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0f28400b-b007-4ee5-a8fb-1aba7192e49f): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:30:29 crc kubenswrapper[4895]: E0129 16:30:29.789976 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.839732 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-logs\") on node 
\"crc\" DevicePath \"\"" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.839821 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.839837 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.839848 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm6nh\" (UniqueName: \"kubernetes.io/projected/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-kube-api-access-bm6nh\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:29 crc kubenswrapper[4895]: I0129 16:30:29.839860 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.291290 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mmtp6" event={"ID":"0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293","Type":"ContainerDied","Data":"86021b68a4ccb04367079360f022a2bc2d820dc56f2fbe70017bf7c6ba036db6"} Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.291340 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86021b68a4ccb04367079360f022a2bc2d820dc56f2fbe70017bf7c6ba036db6" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.291414 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mmtp6" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.294111 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7s9sc" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.294274 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f28400b-b007-4ee5-a8fb-1aba7192e49f","Type":"ContainerStarted","Data":"8f199e65908344d6eee44c53800895df4e18a5dda3d49ec94347e19eedda7bff"} Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.295627 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerName="sg-core" containerID="cri-o://8f199e65908344d6eee44c53800895df4e18a5dda3d49ec94347e19eedda7bff" gracePeriod=30 Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.295765 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerName="ceilometer-notification-agent" containerID="cri-o://76d21573eebe5cf08b57bcc3beaa9fc60df4fd775279f37dd755c76bc85ebc5c" gracePeriod=30 Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.295957 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerName="ceilometer-central-agent" containerID="cri-o://b75c88295c405fd70ac7a4d8e86b83a727b5b8002469318151dfd82bf2467b99" gracePeriod=30 Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.448448 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-84bb8677c6-sfxjh"] Jan 29 16:30:30 crc kubenswrapper[4895]: E0129 16:30:30.448920 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59598724-bed0-4b4c-9957-6c282df5b4a5" containerName="keystone-bootstrap" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.448936 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="59598724-bed0-4b4c-9957-6c282df5b4a5" containerName="keystone-bootstrap" Jan 29 16:30:30 
crc kubenswrapper[4895]: E0129 16:30:30.448950 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293" containerName="placement-db-sync" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.448959 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293" containerName="placement-db-sync" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.449128 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293" containerName="placement-db-sync" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.449147 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="59598724-bed0-4b4c-9957-6c282df5b4a5" containerName="keystone-bootstrap" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.450424 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.454299 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.455537 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.455845 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.456051 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6ksll" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.459687 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.468531 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c992c27a-1165-4c22-99e3-67bee151dfb4-logs\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.468588 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-config-data\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.468619 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-internal-tls-certs\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.468639 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-scripts\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.468666 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdj4f\" (UniqueName: \"kubernetes.io/projected/c992c27a-1165-4c22-99e3-67bee151dfb4-kube-api-access-mdj4f\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.468698 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-combined-ca-bundle\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.468756 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-public-tls-certs\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.501255 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84bb8677c6-sfxjh"] Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.565698 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7f897d48fd-hgqsw"] Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.567323 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.570729 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-internal-tls-certs\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.570787 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-scripts\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.570841 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdj4f\" (UniqueName: \"kubernetes.io/projected/c992c27a-1165-4c22-99e3-67bee151dfb4-kube-api-access-mdj4f\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.570982 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-combined-ca-bundle\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.571217 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-public-tls-certs\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" 
Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.571318 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c992c27a-1165-4c22-99e3-67bee151dfb4-logs\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.571378 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-config-data\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.577410 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.577496 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.577988 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.578238 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.578470 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-m7trg" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.578677 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.579682 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c992c27a-1165-4c22-99e3-67bee151dfb4-logs\") pod 
\"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.581940 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-scripts\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.597735 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-public-tls-certs\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.601701 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdj4f\" (UniqueName: \"kubernetes.io/projected/c992c27a-1165-4c22-99e3-67bee151dfb4-kube-api-access-mdj4f\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.627265 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-internal-tls-certs\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.629569 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-combined-ca-bundle\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " 
pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.631203 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-config-data\") pod \"placement-84bb8677c6-sfxjh\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.649575 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f897d48fd-hgqsw"] Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.673327 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-credential-keys\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.673437 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-fernet-keys\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.673481 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjw9p\" (UniqueName: \"kubernetes.io/projected/1971cd12-642f-4a58-917d-4dda4953854a-kube-api-access-bjw9p\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.673540 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-combined-ca-bundle\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.673560 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-scripts\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.673577 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-internal-tls-certs\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.673595 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-config-data\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.673627 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-public-tls-certs\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.775055 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-combined-ca-bundle\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.775103 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-scripts\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.775127 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-internal-tls-certs\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.775153 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-config-data\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.775193 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-public-tls-certs\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.775216 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-credential-keys\") pod 
\"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.775264 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-fernet-keys\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.775308 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjw9p\" (UniqueName: \"kubernetes.io/projected/1971cd12-642f-4a58-917d-4dda4953854a-kube-api-access-bjw9p\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.781451 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.787992 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-scripts\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.788079 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-config-data\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.788109 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-public-tls-certs\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.788359 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-fernet-keys\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.788453 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-combined-ca-bundle\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.788638 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-credential-keys\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.788983 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1971cd12-642f-4a58-917d-4dda4953854a-internal-tls-certs\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.794592 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.799403 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjw9p\" (UniqueName: \"kubernetes.io/projected/1971cd12-642f-4a58-917d-4dda4953854a-kube-api-access-bjw9p\") pod \"keystone-7f897d48fd-hgqsw\" (UID: \"1971cd12-642f-4a58-917d-4dda4953854a\") " pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.980670 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca03651-86f1-4b94-bdfc-ff182c872873-combined-ca-bundle\") pod \"4ca03651-86f1-4b94-bdfc-ff182c872873\" (UID: \"4ca03651-86f1-4b94-bdfc-ff182c872873\") " Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.981223 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hntsj\" (UniqueName: \"kubernetes.io/projected/4ca03651-86f1-4b94-bdfc-ff182c872873-kube-api-access-hntsj\") pod \"4ca03651-86f1-4b94-bdfc-ff182c872873\" (UID: \"4ca03651-86f1-4b94-bdfc-ff182c872873\") " Jan 29 
16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.981345 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ca03651-86f1-4b94-bdfc-ff182c872873-db-sync-config-data\") pod \"4ca03651-86f1-4b94-bdfc-ff182c872873\" (UID: \"4ca03651-86f1-4b94-bdfc-ff182c872873\") " Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.987745 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ca03651-86f1-4b94-bdfc-ff182c872873-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4ca03651-86f1-4b94-bdfc-ff182c872873" (UID: "4ca03651-86f1-4b94-bdfc-ff182c872873"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:30 crc kubenswrapper[4895]: I0129 16:30:30.987834 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ca03651-86f1-4b94-bdfc-ff182c872873-kube-api-access-hntsj" (OuterVolumeSpecName: "kube-api-access-hntsj") pod "4ca03651-86f1-4b94-bdfc-ff182c872873" (UID: "4ca03651-86f1-4b94-bdfc-ff182c872873"). InnerVolumeSpecName "kube-api-access-hntsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.017309 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ca03651-86f1-4b94-bdfc-ff182c872873-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ca03651-86f1-4b94-bdfc-ff182c872873" (UID: "4ca03651-86f1-4b94-bdfc-ff182c872873"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.083553 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hntsj\" (UniqueName: \"kubernetes.io/projected/4ca03651-86f1-4b94-bdfc-ff182c872873-kube-api-access-hntsj\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.083593 4895 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ca03651-86f1-4b94-bdfc-ff182c872873-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.083607 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca03651-86f1-4b94-bdfc-ff182c872873-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.089481 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.324582 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2vnnx" event={"ID":"4ca03651-86f1-4b94-bdfc-ff182c872873","Type":"ContainerDied","Data":"c1229e2a125006e54f82fd953113704b0365b9cce68116fd3244969ed0d223c6"} Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.324633 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1229e2a125006e54f82fd953113704b0365b9cce68116fd3244969ed0d223c6" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.324639 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2vnnx" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.326729 4895 generic.go:334] "Generic (PLEG): container finished" podID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerID="8f199e65908344d6eee44c53800895df4e18a5dda3d49ec94347e19eedda7bff" exitCode=2 Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.326753 4895 generic.go:334] "Generic (PLEG): container finished" podID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerID="76d21573eebe5cf08b57bcc3beaa9fc60df4fd775279f37dd755c76bc85ebc5c" exitCode=0 Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.326763 4895 generic.go:334] "Generic (PLEG): container finished" podID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerID="b75c88295c405fd70ac7a4d8e86b83a727b5b8002469318151dfd82bf2467b99" exitCode=0 Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.326792 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f28400b-b007-4ee5-a8fb-1aba7192e49f","Type":"ContainerDied","Data":"8f199e65908344d6eee44c53800895df4e18a5dda3d49ec94347e19eedda7bff"} Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.326819 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f28400b-b007-4ee5-a8fb-1aba7192e49f","Type":"ContainerDied","Data":"76d21573eebe5cf08b57bcc3beaa9fc60df4fd775279f37dd755c76bc85ebc5c"} Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.326830 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f28400b-b007-4ee5-a8fb-1aba7192e49f","Type":"ContainerDied","Data":"b75c88295c405fd70ac7a4d8e86b83a727b5b8002469318151dfd82bf2467b99"} Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.328398 4895 generic.go:334] "Generic (PLEG): container finished" podID="257f0f91-6612-425d-9cff-50bc99ca7979" containerID="7bda84e034982ccb40df97d698ecaccef02e743160c5520305f9dbda15e89011" exitCode=0 Jan 29 16:30:31 crc 
kubenswrapper[4895]: I0129 16:30:31.328454 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bv8g6" event={"ID":"257f0f91-6612-425d-9cff-50bc99ca7979","Type":"ContainerDied","Data":"7bda84e034982ccb40df97d698ecaccef02e743160c5520305f9dbda15e89011"} Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.342924 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84bb8677c6-sfxjh"] Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.537011 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.621610 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-556f55978-6k8pl"] Jan 29 16:30:31 crc kubenswrapper[4895]: E0129 16:30:31.622087 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca03651-86f1-4b94-bdfc-ff182c872873" containerName="barbican-db-sync" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.622101 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca03651-86f1-4b94-bdfc-ff182c872873" containerName="barbican-db-sync" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.622253 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ca03651-86f1-4b94-bdfc-ff182c872873" containerName="barbican-db-sync" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.623193 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.629338 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.629610 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.632007 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5nvs4" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.642104 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-556f55978-6k8pl"] Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.666269 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-n64hc"] Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.666614 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-n64hc" podUID="47b279e9-73f3-444b-bf95-d97f9cc546ae" containerName="dnsmasq-dns" containerID="cri-o://aa4f271dba09a2a5626c37ee9688bb9605e2bb4d346d2a7baeb8e10905204595" gracePeriod=10 Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.681314 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.695499 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6b5b766675-prdvb"] Jan 29 16:30:31 crc kubenswrapper[4895]: E0129 16:30:31.695931 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerName="ceilometer-central-agent" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.695951 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerName="ceilometer-central-agent" Jan 29 16:30:31 crc kubenswrapper[4895]: E0129 16:30:31.695970 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerName="ceilometer-notification-agent" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.695977 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerName="ceilometer-notification-agent" Jan 29 16:30:31 crc kubenswrapper[4895]: E0129 16:30:31.696023 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerName="sg-core" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.696030 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerName="sg-core" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.696164 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerName="ceilometer-notification-agent" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.696184 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerName="ceilometer-central-agent" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.696193 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" containerName="sg-core" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.697200 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.707559 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.796350 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f897d48fd-hgqsw"] Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.818093 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b5b766675-prdvb"] Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.839470 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-sqc4l"] Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.840982 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-sqc4l"] Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.841072 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.846261 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-scripts\") pod \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.846426 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f28400b-b007-4ee5-a8fb-1aba7192e49f-run-httpd\") pod \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.846566 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9zw7\" (UniqueName: \"kubernetes.io/projected/0f28400b-b007-4ee5-a8fb-1aba7192e49f-kube-api-access-r9zw7\") pod \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.846697 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-sg-core-conf-yaml\") pod \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.846822 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f28400b-b007-4ee5-a8fb-1aba7192e49f-log-httpd\") pod \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.847042 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-combined-ca-bundle\") pod \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.847181 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-config-data\") pod \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\" (UID: \"0f28400b-b007-4ee5-a8fb-1aba7192e49f\") " Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.854233 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f28400b-b007-4ee5-a8fb-1aba7192e49f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0f28400b-b007-4ee5-a8fb-1aba7192e49f" (UID: "0f28400b-b007-4ee5-a8fb-1aba7192e49f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.861157 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f28400b-b007-4ee5-a8fb-1aba7192e49f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0f28400b-b007-4ee5-a8fb-1aba7192e49f" (UID: "0f28400b-b007-4ee5-a8fb-1aba7192e49f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.871232 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4684685c-78bd-4773-ba6d-7e663bb1ea19-config-data\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.871407 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxqnw\" (UniqueName: \"kubernetes.io/projected/fc68545b-8e7a-4b48-86f1-86b5e188672d-kube-api-access-zxqnw\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.871609 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbmx\" (UniqueName: \"kubernetes.io/projected/fc11f864-e186-4067-a521-1357781f6e78-kube-api-access-jxbmx\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.872516 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc68545b-8e7a-4b48-86f1-86b5e188672d-logs\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.872739 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fc68545b-8e7a-4b48-86f1-86b5e188672d-combined-ca-bundle\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.872880 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4684685c-78bd-4773-ba6d-7e663bb1ea19-logs\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.873070 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-dns-svc\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.875616 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.875766 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-config\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.875900 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fc68545b-8e7a-4b48-86f1-86b5e188672d-config-data\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.876033 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.876304 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2brp\" (UniqueName: \"kubernetes.io/projected/4684685c-78bd-4773-ba6d-7e663bb1ea19-kube-api-access-g2brp\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.876417 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4684685c-78bd-4773-ba6d-7e663bb1ea19-combined-ca-bundle\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.876511 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc68545b-8e7a-4b48-86f1-86b5e188672d-config-data-custom\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:31 crc 
kubenswrapper[4895]: I0129 16:30:31.876648 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4684685c-78bd-4773-ba6d-7e663bb1ea19-config-data-custom\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.876813 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f28400b-b007-4ee5-a8fb-1aba7192e49f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.876898 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f28400b-b007-4ee5-a8fb-1aba7192e49f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.906691 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-scripts" (OuterVolumeSpecName: "scripts") pod "0f28400b-b007-4ee5-a8fb-1aba7192e49f" (UID: "0f28400b-b007-4ee5-a8fb-1aba7192e49f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.908135 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f28400b-b007-4ee5-a8fb-1aba7192e49f-kube-api-access-r9zw7" (OuterVolumeSpecName: "kube-api-access-r9zw7") pod "0f28400b-b007-4ee5-a8fb-1aba7192e49f" (UID: "0f28400b-b007-4ee5-a8fb-1aba7192e49f"). InnerVolumeSpecName "kube-api-access-r9zw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.938391 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0f28400b-b007-4ee5-a8fb-1aba7192e49f" (UID: "0f28400b-b007-4ee5-a8fb-1aba7192e49f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.972934 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-798d9b5844-wcjfg"] Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.974781 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.981057 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.981570 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.981638 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-config\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.981679 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fc68545b-8e7a-4b48-86f1-86b5e188672d-config-data\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.981699 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.981724 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2brp\" (UniqueName: \"kubernetes.io/projected/4684685c-78bd-4773-ba6d-7e663bb1ea19-kube-api-access-g2brp\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.981759 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4684685c-78bd-4773-ba6d-7e663bb1ea19-combined-ca-bundle\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.981791 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc68545b-8e7a-4b48-86f1-86b5e188672d-config-data-custom\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.981840 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4684685c-78bd-4773-ba6d-7e663bb1ea19-config-data-custom\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.982854 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-ovsdbserver-sb\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983020 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4684685c-78bd-4773-ba6d-7e663bb1ea19-config-data\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983033 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-ovsdbserver-nb\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983058 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxqnw\" (UniqueName: \"kubernetes.io/projected/fc68545b-8e7a-4b48-86f1-86b5e188672d-kube-api-access-zxqnw\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983159 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jxbmx\" (UniqueName: \"kubernetes.io/projected/fc11f864-e186-4067-a521-1357781f6e78-kube-api-access-jxbmx\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983255 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc68545b-8e7a-4b48-86f1-86b5e188672d-logs\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983299 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc68545b-8e7a-4b48-86f1-86b5e188672d-combined-ca-bundle\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983372 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4684685c-78bd-4773-ba6d-7e663bb1ea19-logs\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983479 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-dns-svc\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983621 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-config\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983693 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9zw7\" (UniqueName: \"kubernetes.io/projected/0f28400b-b007-4ee5-a8fb-1aba7192e49f-kube-api-access-r9zw7\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983721 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983741 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.983933 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4684685c-78bd-4773-ba6d-7e663bb1ea19-logs\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.984010 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f28400b-b007-4ee5-a8fb-1aba7192e49f" (UID: "0f28400b-b007-4ee5-a8fb-1aba7192e49f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.985007 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-dns-svc\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.989879 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-798d9b5844-wcjfg"] Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.990302 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc68545b-8e7a-4b48-86f1-86b5e188672d-logs\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:31 crc kubenswrapper[4895]: I0129 16:30:31.991471 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc68545b-8e7a-4b48-86f1-86b5e188672d-config-data\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.005712 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4684685c-78bd-4773-ba6d-7e663bb1ea19-config-data-custom\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.005810 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fc68545b-8e7a-4b48-86f1-86b5e188672d-combined-ca-bundle\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.008325 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4684685c-78bd-4773-ba6d-7e663bb1ea19-combined-ca-bundle\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.012455 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4684685c-78bd-4773-ba6d-7e663bb1ea19-config-data\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.013515 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxqnw\" (UniqueName: \"kubernetes.io/projected/fc68545b-8e7a-4b48-86f1-86b5e188672d-kube-api-access-zxqnw\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.014150 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc68545b-8e7a-4b48-86f1-86b5e188672d-config-data-custom\") pod \"barbican-keystone-listener-556f55978-6k8pl\" (UID: \"fc68545b-8e7a-4b48-86f1-86b5e188672d\") " pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.019515 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jxbmx\" (UniqueName: \"kubernetes.io/projected/fc11f864-e186-4067-a521-1357781f6e78-kube-api-access-jxbmx\") pod \"dnsmasq-dns-7f46f79845-sqc4l\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.026754 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2brp\" (UniqueName: \"kubernetes.io/projected/4684685c-78bd-4773-ba6d-7e663bb1ea19-kube-api-access-g2brp\") pod \"barbican-worker-6b5b766675-prdvb\" (UID: \"4684685c-78bd-4773-ba6d-7e663bb1ea19\") " pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.034854 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-config-data" (OuterVolumeSpecName: "config-data") pod "0f28400b-b007-4ee5-a8fb-1aba7192e49f" (UID: "0f28400b-b007-4ee5-a8fb-1aba7192e49f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.086325 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-combined-ca-bundle\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.086474 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-config-data-custom\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.086528 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6shbk\" (UniqueName: \"kubernetes.io/projected/8546502f-24d9-407c-86ef-c12e9ccb70e4-kube-api-access-6shbk\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.086600 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8546502f-24d9-407c-86ef-c12e9ccb70e4-logs\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.086638 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-config-data\") pod 
\"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.086727 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.086741 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f28400b-b007-4ee5-a8fb-1aba7192e49f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.184052 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6b5b766675-prdvb" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.190168 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.190794 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8546502f-24d9-407c-86ef-c12e9ccb70e4-logs\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.190880 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-config-data\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.190956 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-combined-ca-bundle\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.191055 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-config-data-custom\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.191107 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6shbk\" (UniqueName: \"kubernetes.io/projected/8546502f-24d9-407c-86ef-c12e9ccb70e4-kube-api-access-6shbk\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.191291 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8546502f-24d9-407c-86ef-c12e9ccb70e4-logs\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.196315 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-combined-ca-bundle\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.197024 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-config-data-custom\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.197342 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-config-data\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.216663 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6shbk\" (UniqueName: \"kubernetes.io/projected/8546502f-24d9-407c-86ef-c12e9ccb70e4-kube-api-access-6shbk\") pod \"barbican-api-798d9b5844-wcjfg\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.289475 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-556f55978-6k8pl" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.320423 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.382390 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84bb8677c6-sfxjh" event={"ID":"c992c27a-1165-4c22-99e3-67bee151dfb4","Type":"ContainerStarted","Data":"5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b"} Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.382480 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84bb8677c6-sfxjh" event={"ID":"c992c27a-1165-4c22-99e3-67bee151dfb4","Type":"ContainerStarted","Data":"86e76405b100d8204ab01146ffc67dfdea00cc682573acaea96a3a84080177f1"} Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.406696 4895 generic.go:334] "Generic (PLEG): container finished" podID="47b279e9-73f3-444b-bf95-d97f9cc546ae" containerID="aa4f271dba09a2a5626c37ee9688bb9605e2bb4d346d2a7baeb8e10905204595" exitCode=0 Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.407294 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-n64hc" event={"ID":"47b279e9-73f3-444b-bf95-d97f9cc546ae","Type":"ContainerDied","Data":"aa4f271dba09a2a5626c37ee9688bb9605e2bb4d346d2a7baeb8e10905204595"} Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.407337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-n64hc" event={"ID":"47b279e9-73f3-444b-bf95-d97f9cc546ae","Type":"ContainerDied","Data":"e7c0dd15645127743d1ae883e0e7720ae18bb5f6f778df9eeba204483cbd3d0f"} Jan 29 16:30:32 crc kubenswrapper[4895]: I0129 16:30:32.407348 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7c0dd15645127743d1ae883e0e7720ae18bb5f6f778df9eeba204483cbd3d0f" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.433117 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0f28400b-b007-4ee5-a8fb-1aba7192e49f","Type":"ContainerDied","Data":"e00af9f4e4c2cd87e8aa0d06d2125381e73f20b6cb4b3aaf583a70691afcb972"} Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.433192 4895 scope.go:117] "RemoveContainer" containerID="8f199e65908344d6eee44c53800895df4e18a5dda3d49ec94347e19eedda7bff" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.433378 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.453841 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f897d48fd-hgqsw" event={"ID":"1971cd12-642f-4a58-917d-4dda4953854a","Type":"ContainerStarted","Data":"1edcb26c43ad6053d6a602a102b2d3be9a5e95de38ece59b2b30dcbf4d398b2e"} Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.453912 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f897d48fd-hgqsw" event={"ID":"1971cd12-642f-4a58-917d-4dda4953854a","Type":"ContainerStarted","Data":"a1ba719aab400208180796b6113689ec81d9c8662f40b68ddaf9f736f2c4a94e"} Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.508363 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7f897d48fd-hgqsw" podStartSLOduration=2.508344717 podStartE2EDuration="2.508344717s" podCreationTimestamp="2026-01-29 16:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:32.507177805 +0000 UTC m=+1116.310155079" watchObservedRunningTime="2026-01-29 16:30:32.508344717 +0000 UTC m=+1116.311321981" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.569966 4895 scope.go:117] "RemoveContainer" containerID="76d21573eebe5cf08b57bcc3beaa9fc60df4fd775279f37dd755c76bc85ebc5c" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.605811 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-n64hc" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.644610 4895 scope.go:117] "RemoveContainer" containerID="b75c88295c405fd70ac7a4d8e86b83a727b5b8002469318151dfd82bf2467b99" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.661432 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.692683 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.719694 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-dns-svc\") pod \"47b279e9-73f3-444b-bf95-d97f9cc546ae\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.719751 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-ovsdbserver-sb\") pod \"47b279e9-73f3-444b-bf95-d97f9cc546ae\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.719955 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-config\") pod \"47b279e9-73f3-444b-bf95-d97f9cc546ae\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.719999 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/47b279e9-73f3-444b-bf95-d97f9cc546ae-kube-api-access-zdxrl\") pod \"47b279e9-73f3-444b-bf95-d97f9cc546ae\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 
16:30:32.720038 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-ovsdbserver-nb\") pod \"47b279e9-73f3-444b-bf95-d97f9cc546ae\" (UID: \"47b279e9-73f3-444b-bf95-d97f9cc546ae\") " Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.727224 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:30:33 crc kubenswrapper[4895]: E0129 16:30:32.727726 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b279e9-73f3-444b-bf95-d97f9cc546ae" containerName="dnsmasq-dns" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.727741 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b279e9-73f3-444b-bf95-d97f9cc546ae" containerName="dnsmasq-dns" Jan 29 16:30:33 crc kubenswrapper[4895]: E0129 16:30:32.727762 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b279e9-73f3-444b-bf95-d97f9cc546ae" containerName="init" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.727772 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b279e9-73f3-444b-bf95-d97f9cc546ae" containerName="init" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.727975 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="47b279e9-73f3-444b-bf95-d97f9cc546ae" containerName="dnsmasq-dns" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.744028 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.746286 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b279e9-73f3-444b-bf95-d97f9cc546ae-kube-api-access-zdxrl" (OuterVolumeSpecName: "kube-api-access-zdxrl") pod "47b279e9-73f3-444b-bf95-d97f9cc546ae" (UID: "47b279e9-73f3-444b-bf95-d97f9cc546ae"). InnerVolumeSpecName "kube-api-access-zdxrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.749624 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.749909 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.760255 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.806100 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47b279e9-73f3-444b-bf95-d97f9cc546ae" (UID: "47b279e9-73f3-444b-bf95-d97f9cc546ae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.823011 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-config" (OuterVolumeSpecName: "config") pod "47b279e9-73f3-444b-bf95-d97f9cc546ae" (UID: "47b279e9-73f3-444b-bf95-d97f9cc546ae"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.823687 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08ccd04e-f148-46a4-88aa-b488fa132756-log-httpd\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.823762 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.823785 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-config-data\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.823830 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08ccd04e-f148-46a4-88aa-b488fa132756-run-httpd\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.823923 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-scripts\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.823941 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6djj\" (UniqueName: \"kubernetes.io/projected/08ccd04e-f148-46a4-88aa-b488fa132756-kube-api-access-m6djj\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.823991 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.824051 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.824062 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/47b279e9-73f3-444b-bf95-d97f9cc546ae-kube-api-access-zdxrl\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.824074 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.826677 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47b279e9-73f3-444b-bf95-d97f9cc546ae" (UID: "47b279e9-73f3-444b-bf95-d97f9cc546ae"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.840324 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47b279e9-73f3-444b-bf95-d97f9cc546ae" (UID: "47b279e9-73f3-444b-bf95-d97f9cc546ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.926595 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.926652 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-config-data\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.926700 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08ccd04e-f148-46a4-88aa-b488fa132756-run-httpd\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.926779 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-scripts\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.926800 4895 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-m6djj\" (UniqueName: \"kubernetes.io/projected/08ccd04e-f148-46a4-88aa-b488fa132756-kube-api-access-m6djj\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.926831 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.926964 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08ccd04e-f148-46a4-88aa-b488fa132756-log-httpd\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.927028 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.927043 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47b279e9-73f3-444b-bf95-d97f9cc546ae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.927627 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08ccd04e-f148-46a4-88aa-b488fa132756-log-httpd\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.930743 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/08ccd04e-f148-46a4-88aa-b488fa132756-run-httpd\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.940725 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.940971 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.941230 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-config-data\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.941450 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-scripts\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:32.948191 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6djj\" (UniqueName: \"kubernetes.io/projected/08ccd04e-f148-46a4-88aa-b488fa132756-kube-api-access-m6djj\") pod \"ceilometer-0\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") " pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.049311 4895 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f28400b-b007-4ee5-a8fb-1aba7192e49f" path="/var/lib/kubelet/pods/0f28400b-b007-4ee5-a8fb-1aba7192e49f/volumes" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.116645 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.469069 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-n64hc" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.469259 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84bb8677c6-sfxjh" event={"ID":"c992c27a-1165-4c22-99e3-67bee151dfb4","Type":"ContainerStarted","Data":"73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0"} Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.469632 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.469661 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.469674 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.525280 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-84bb8677c6-sfxjh" podStartSLOduration=3.525243779 podStartE2EDuration="3.525243779s" podCreationTimestamp="2026-01-29 16:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:33.49910975 +0000 UTC m=+1117.302087074" watchObservedRunningTime="2026-01-29 16:30:33.525243779 +0000 UTC m=+1117.328221063" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 
16:30:33.538339 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-n64hc"] Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.564157 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-n64hc"] Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.734136 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.736249 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b5b766675-prdvb"] Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.873117 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfhfh\" (UniqueName: \"kubernetes.io/projected/257f0f91-6612-425d-9cff-50bc99ca7979-kube-api-access-zfhfh\") pod \"257f0f91-6612-425d-9cff-50bc99ca7979\" (UID: \"257f0f91-6612-425d-9cff-50bc99ca7979\") " Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.873593 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257f0f91-6612-425d-9cff-50bc99ca7979-combined-ca-bundle\") pod \"257f0f91-6612-425d-9cff-50bc99ca7979\" (UID: \"257f0f91-6612-425d-9cff-50bc99ca7979\") " Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.873684 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/257f0f91-6612-425d-9cff-50bc99ca7979-config\") pod \"257f0f91-6612-425d-9cff-50bc99ca7979\" (UID: \"257f0f91-6612-425d-9cff-50bc99ca7979\") " Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.882209 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257f0f91-6612-425d-9cff-50bc99ca7979-kube-api-access-zfhfh" (OuterVolumeSpecName: "kube-api-access-zfhfh") pod 
"257f0f91-6612-425d-9cff-50bc99ca7979" (UID: "257f0f91-6612-425d-9cff-50bc99ca7979"). InnerVolumeSpecName "kube-api-access-zfhfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.935436 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/257f0f91-6612-425d-9cff-50bc99ca7979-config" (OuterVolumeSpecName: "config") pod "257f0f91-6612-425d-9cff-50bc99ca7979" (UID: "257f0f91-6612-425d-9cff-50bc99ca7979"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.938670 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/257f0f91-6612-425d-9cff-50bc99ca7979-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "257f0f91-6612-425d-9cff-50bc99ca7979" (UID: "257f0f91-6612-425d-9cff-50bc99ca7979"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.976726 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/257f0f91-6612-425d-9cff-50bc99ca7979-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.976772 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfhfh\" (UniqueName: \"kubernetes.io/projected/257f0f91-6612-425d-9cff-50bc99ca7979-kube-api-access-zfhfh\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:33 crc kubenswrapper[4895]: I0129 16:30:33.976791 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257f0f91-6612-425d-9cff-50bc99ca7979-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.050075 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] 
Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.057138 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-798d9b5844-wcjfg"] Jan 29 16:30:34 crc kubenswrapper[4895]: W0129 16:30:34.057737 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08ccd04e_f148_46a4_88aa_b488fa132756.slice/crio-8287769d17f192ca4c2ffa30bfababfd0c9e4a4cb7c850ce8b611d9bcc3ad4c8 WatchSource:0}: Error finding container 8287769d17f192ca4c2ffa30bfababfd0c9e4a4cb7c850ce8b611d9bcc3ad4c8: Status 404 returned error can't find the container with id 8287769d17f192ca4c2ffa30bfababfd0c9e4a4cb7c850ce8b611d9bcc3ad4c8 Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.081299 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-556f55978-6k8pl"] Jan 29 16:30:34 crc kubenswrapper[4895]: W0129 16:30:34.097497 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc68545b_8e7a_4b48_86f1_86b5e188672d.slice/crio-ec811b6b1cda2ad2086e9bf31cf1754216d84fac840227018d4de0fbabedfe14 WatchSource:0}: Error finding container ec811b6b1cda2ad2086e9bf31cf1754216d84fac840227018d4de0fbabedfe14: Status 404 returned error can't find the container with id ec811b6b1cda2ad2086e9bf31cf1754216d84fac840227018d4de0fbabedfe14 Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.102022 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-sqc4l"] Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.499754 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bv8g6" event={"ID":"257f0f91-6612-425d-9cff-50bc99ca7979","Type":"ContainerDied","Data":"1c7414b7e594baacd14011088f847ef211c2cddda1d16a0091bd722a15756aee"} Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.500248 4895 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="1c7414b7e594baacd14011088f847ef211c2cddda1d16a0091bd722a15756aee" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.502015 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bv8g6" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.502353 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-556f55978-6k8pl" event={"ID":"fc68545b-8e7a-4b48-86f1-86b5e188672d","Type":"ContainerStarted","Data":"ec811b6b1cda2ad2086e9bf31cf1754216d84fac840227018d4de0fbabedfe14"} Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.503735 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08ccd04e-f148-46a4-88aa-b488fa132756","Type":"ContainerStarted","Data":"8287769d17f192ca4c2ffa30bfababfd0c9e4a4cb7c850ce8b611d9bcc3ad4c8"} Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.508718 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798d9b5844-wcjfg" event={"ID":"8546502f-24d9-407c-86ef-c12e9ccb70e4","Type":"ContainerStarted","Data":"259772f56e79b5d343ef4f40e04252fbc022d06e2dae2c581ff2d1827a751cb9"} Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.518592 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" event={"ID":"fc11f864-e186-4067-a521-1357781f6e78","Type":"ContainerStarted","Data":"8ab77443c963ff0db02ca52d0c85f3562d38c1d5ae571f52b05309fdd053329a"} Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.524008 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b5b766675-prdvb" event={"ID":"4684685c-78bd-4773-ba6d-7e663bb1ea19","Type":"ContainerStarted","Data":"0b9b577fc95fe080a8127bce097d9d18ea04e23780d63e9085ee4d426941d74b"} Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.693211 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-api-68f56c8b56-xwgv2"] Jan 29 16:30:34 crc kubenswrapper[4895]: E0129 16:30:34.693721 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257f0f91-6612-425d-9cff-50bc99ca7979" containerName="neutron-db-sync" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.693746 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="257f0f91-6612-425d-9cff-50bc99ca7979" containerName="neutron-db-sync" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.693993 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="257f0f91-6612-425d-9cff-50bc99ca7979" containerName="neutron-db-sync" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.695165 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.705792 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-68f56c8b56-xwgv2"] Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.707713 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.708044 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.791235 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-internal-tls-certs\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.791647 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbbst\" (UniqueName: 
\"kubernetes.io/projected/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-kube-api-access-nbbst\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.791688 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-config-data\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.791780 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-config-data-custom\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.791812 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-public-tls-certs\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.791852 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-logs\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.791892 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-combined-ca-bundle\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.898311 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-internal-tls-certs\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.898372 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbbst\" (UniqueName: \"kubernetes.io/projected/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-kube-api-access-nbbst\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.898405 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-config-data\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.898458 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-config-data-custom\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.898487 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-public-tls-certs\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.898609 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-logs\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.898629 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-combined-ca-bundle\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.902168 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-logs\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.904035 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-combined-ca-bundle\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.904429 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-config-data\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.918562 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-internal-tls-certs\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.919111 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-public-tls-certs\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.924477 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-config-data-custom\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.934719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbbst\" (UniqueName: \"kubernetes.io/projected/02fc8cd9-5a26-4ca0-9a6b-f70458ed2977-kube-api-access-nbbst\") pod \"barbican-api-68f56c8b56-xwgv2\" (UID: \"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977\") " pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:34 crc kubenswrapper[4895]: I0129 16:30:34.966125 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-sqc4l"] Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.021497 4895 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.034710 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-fvrcj"] Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.036463 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.085069 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47b279e9-73f3-444b-bf95-d97f9cc546ae" path="/var/lib/kubelet/pods/47b279e9-73f3-444b-bf95-d97f9cc546ae/volumes" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.085821 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-fvrcj"] Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.103463 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.103612 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rkck\" (UniqueName: \"kubernetes.io/projected/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-kube-api-access-6rkck\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.103665 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-config\") pod 
\"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.103682 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.103722 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-dns-svc\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.234400 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.234877 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rkck\" (UniqueName: \"kubernetes.io/projected/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-kube-api-access-6rkck\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.234925 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-config\") pod \"dnsmasq-dns-869f779d85-fvrcj\" 
(UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.234951 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.234989 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-dns-svc\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.237173 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-dns-svc\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.237752 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.243112 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-config\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc 
kubenswrapper[4895]: I0129 16:30:35.243833 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.302408 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rkck\" (UniqueName: \"kubernetes.io/projected/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-kube-api-access-6rkck\") pod \"dnsmasq-dns-869f779d85-fvrcj\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.333303 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77cbdff676-qn2gg"] Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.339912 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.355969 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.356381 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.356691 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.356949 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-c444h" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.384652 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77cbdff676-qn2gg"] Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.440990 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.446681 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-config\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.447127 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-combined-ca-bundle\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.447253 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-httpd-config\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.447370 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-ovndb-tls-certs\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.447508 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4nm4\" (UniqueName: \"kubernetes.io/projected/776df2d4-3174-4c65-9966-658b00bc63fa-kube-api-access-b4nm4\") pod \"neutron-77cbdff676-qn2gg\" (UID: 
\"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.553254 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-httpd-config\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.553363 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-ovndb-tls-certs\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.553424 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4nm4\" (UniqueName: \"kubernetes.io/projected/776df2d4-3174-4c65-9966-658b00bc63fa-kube-api-access-b4nm4\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.553461 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-config\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.553487 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-combined-ca-bundle\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 
crc kubenswrapper[4895]: I0129 16:30:35.568491 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-ovndb-tls-certs\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.569668 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-httpd-config\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.581657 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-combined-ca-bundle\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.582366 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-config\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.587649 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4nm4\" (UniqueName: \"kubernetes.io/projected/776df2d4-3174-4c65-9966-658b00bc63fa-kube-api-access-b4nm4\") pod \"neutron-77cbdff676-qn2gg\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.591068 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-798d9b5844-wcjfg" event={"ID":"8546502f-24d9-407c-86ef-c12e9ccb70e4","Type":"ContainerStarted","Data":"706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c"} Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.591127 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798d9b5844-wcjfg" event={"ID":"8546502f-24d9-407c-86ef-c12e9ccb70e4","Type":"ContainerStarted","Data":"5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3"} Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.591371 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.591456 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.598580 4895 generic.go:334] "Generic (PLEG): container finished" podID="fc11f864-e186-4067-a521-1357781f6e78" containerID="507555e569ee9907fa151edda2d0e9f37de904f696a7dc9108d1a5f593a8ee1a" exitCode=0 Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.598652 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" event={"ID":"fc11f864-e186-4067-a521-1357781f6e78","Type":"ContainerDied","Data":"507555e569ee9907fa151edda2d0e9f37de904f696a7dc9108d1a5f593a8ee1a"} Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.633910 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-798d9b5844-wcjfg" podStartSLOduration=4.633886203 podStartE2EDuration="4.633886203s" podCreationTimestamp="2026-01-29 16:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:35.629542206 +0000 UTC m=+1119.432519470" watchObservedRunningTime="2026-01-29 16:30:35.633886203 +0000 UTC 
m=+1119.436863477" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.675940 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:35 crc kubenswrapper[4895]: I0129 16:30:35.953899 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-68f56c8b56-xwgv2"] Jan 29 16:30:36 crc kubenswrapper[4895]: E0129 16:30:36.024542 4895 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 29 16:30:36 crc kubenswrapper[4895]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/fc11f864-e186-4067-a521-1357781f6e78/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 29 16:30:36 crc kubenswrapper[4895]: > podSandboxID="8ab77443c963ff0db02ca52d0c85f3562d38c1d5ae571f52b05309fdd053329a" Jan 29 16:30:36 crc kubenswrapper[4895]: E0129 16:30:36.024727 4895 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 16:30:36 crc kubenswrapper[4895]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56fh88h689h75h5f9h595h5c6h5fbhdh54bh5f7h94h89hd9hd7h65bh655h5bdh649hcch594h696h685hf9h54dh5dch57fh556h5ddh57dh556h665q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jxbmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7f46f79845-sqc4l_openstack(fc11f864-e186-4067-a521-1357781f6e78): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/fc11f864-e186-4067-a521-1357781f6e78/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 29 16:30:36 crc kubenswrapper[4895]: > logger="UnhandledError" Jan 29 16:30:36 crc kubenswrapper[4895]: E0129 16:30:36.028400 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/fc11f864-e186-4067-a521-1357781f6e78/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" podUID="fc11f864-e186-4067-a521-1357781f6e78" Jan 29 16:30:36 crc kubenswrapper[4895]: I0129 16:30:36.144945 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-fvrcj"] Jan 29 16:30:36 crc kubenswrapper[4895]: W0129 16:30:36.156574 4895 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f1b52b_d5cc_4dfe_9fe8_709c3d996048.slice/crio-fe334fd05b21e5375dbf9f3643c540dac27aaede1760180b645d73e7e5f7704d WatchSource:0}: Error finding container fe334fd05b21e5375dbf9f3643c540dac27aaede1760180b645d73e7e5f7704d: Status 404 returned error can't find the container with id fe334fd05b21e5375dbf9f3643c540dac27aaede1760180b645d73e7e5f7704d Jan 29 16:30:36 crc kubenswrapper[4895]: I0129 16:30:36.616587 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68f56c8b56-xwgv2" event={"ID":"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977","Type":"ContainerStarted","Data":"5158928dbfc6ce3a4cf89e05f1fad8e1c7ccbfb47cb1d85a30a1c98e97bb31e4"} Jan 29 16:30:36 crc kubenswrapper[4895]: I0129 16:30:36.616979 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68f56c8b56-xwgv2" event={"ID":"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977","Type":"ContainerStarted","Data":"77ed563f1265d385c0d81ca93f603be091107fa39ba9179ec67f5fd3e642bfce"} Jan 29 16:30:36 crc kubenswrapper[4895]: I0129 16:30:36.618289 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08ccd04e-f148-46a4-88aa-b488fa132756","Type":"ContainerStarted","Data":"b9fa803e3d4c42cf9e03a8f27fd12bf27796d201742881dfb7f6ed8aeb6998e5"} Jan 29 16:30:36 crc kubenswrapper[4895]: I0129 16:30:36.621303 4895 generic.go:334] "Generic (PLEG): container finished" podID="01f1b52b-d5cc-4dfe-9fe8-709c3d996048" containerID="f999ac46b7f073a1167b9d590f67f5ec53429367096698286d558468e230f284" exitCode=0 Jan 29 16:30:36 crc kubenswrapper[4895]: I0129 16:30:36.621465 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" event={"ID":"01f1b52b-d5cc-4dfe-9fe8-709c3d996048","Type":"ContainerDied","Data":"f999ac46b7f073a1167b9d590f67f5ec53429367096698286d558468e230f284"} Jan 29 16:30:36 crc kubenswrapper[4895]: I0129 16:30:36.621517 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" event={"ID":"01f1b52b-d5cc-4dfe-9fe8-709c3d996048","Type":"ContainerStarted","Data":"fe334fd05b21e5375dbf9f3643c540dac27aaede1760180b645d73e7e5f7704d"} Jan 29 16:30:36 crc kubenswrapper[4895]: I0129 16:30:36.715074 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77cbdff676-qn2gg"] Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.138180 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.288692 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxbmx\" (UniqueName: \"kubernetes.io/projected/fc11f864-e186-4067-a521-1357781f6e78-kube-api-access-jxbmx\") pod \"fc11f864-e186-4067-a521-1357781f6e78\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.288755 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-ovsdbserver-sb\") pod \"fc11f864-e186-4067-a521-1357781f6e78\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.288858 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-dns-svc\") pod \"fc11f864-e186-4067-a521-1357781f6e78\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.289061 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-config\") pod \"fc11f864-e186-4067-a521-1357781f6e78\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " Jan 29 16:30:37 crc 
kubenswrapper[4895]: I0129 16:30:37.289113 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-ovsdbserver-nb\") pod \"fc11f864-e186-4067-a521-1357781f6e78\" (UID: \"fc11f864-e186-4067-a521-1357781f6e78\") " Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.295082 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc11f864-e186-4067-a521-1357781f6e78-kube-api-access-jxbmx" (OuterVolumeSpecName: "kube-api-access-jxbmx") pod "fc11f864-e186-4067-a521-1357781f6e78" (UID: "fc11f864-e186-4067-a521-1357781f6e78"). InnerVolumeSpecName "kube-api-access-jxbmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.372209 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc11f864-e186-4067-a521-1357781f6e78" (UID: "fc11f864-e186-4067-a521-1357781f6e78"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.372261 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-config" (OuterVolumeSpecName: "config") pod "fc11f864-e186-4067-a521-1357781f6e78" (UID: "fc11f864-e186-4067-a521-1357781f6e78"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.386233 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fc11f864-e186-4067-a521-1357781f6e78" (UID: "fc11f864-e186-4067-a521-1357781f6e78"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.386365 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fc11f864-e186-4067-a521-1357781f6e78" (UID: "fc11f864-e186-4067-a521-1357781f6e78"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.391452 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.391488 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.391501 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxbmx\" (UniqueName: \"kubernetes.io/projected/fc11f864-e186-4067-a521-1357781f6e78-kube-api-access-jxbmx\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.391510 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.391522 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc11f864-e186-4067-a521-1357781f6e78-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.647240 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.647273 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f46f79845-sqc4l" event={"ID":"fc11f864-e186-4067-a521-1357781f6e78","Type":"ContainerDied","Data":"8ab77443c963ff0db02ca52d0c85f3562d38c1d5ae571f52b05309fdd053329a"} Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.647392 4895 scope.go:117] "RemoveContainer" containerID="507555e569ee9907fa151edda2d0e9f37de904f696a7dc9108d1a5f593a8ee1a" Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.650761 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cbdff676-qn2gg" event={"ID":"776df2d4-3174-4c65-9966-658b00bc63fa","Type":"ContainerStarted","Data":"1c3361369e1b0929b0be94e19a539fc53746816169ced2cbea6fdc781bef7fa1"} Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.761717 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-sqc4l"] Jan 29 16:30:37 crc kubenswrapper[4895]: I0129 16:30:37.771154 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f46f79845-sqc4l"] Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.425312 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d896887bc-zxx6j"] Jan 29 16:30:38 crc kubenswrapper[4895]: E0129 16:30:38.426700 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc11f864-e186-4067-a521-1357781f6e78" containerName="init" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.426821 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc11f864-e186-4067-a521-1357781f6e78" containerName="init" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.427108 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc11f864-e186-4067-a521-1357781f6e78" containerName="init" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.428485 4895 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.431330 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.432149 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.445178 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d896887bc-zxx6j"] Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.516104 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rrtl\" (UniqueName: \"kubernetes.io/projected/658f67cc-4c62-4b84-9fba-60e98ece6389-kube-api-access-4rrtl\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.516196 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-combined-ca-bundle\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.516247 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-public-tls-certs\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.516283 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-ovndb-tls-certs\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.516379 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-config\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.516416 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-httpd-config\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.516498 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-internal-tls-certs\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.618373 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-ovndb-tls-certs\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.618845 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-config\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.618879 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-httpd-config\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.618910 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-internal-tls-certs\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.619004 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rrtl\" (UniqueName: \"kubernetes.io/projected/658f67cc-4c62-4b84-9fba-60e98ece6389-kube-api-access-4rrtl\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.619042 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-combined-ca-bundle\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.619073 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-public-tls-certs\") pod 
\"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.628075 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-public-tls-certs\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.631686 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-combined-ca-bundle\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.633482 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-ovndb-tls-certs\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.633796 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-config\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.635256 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-internal-tls-certs\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc 
kubenswrapper[4895]: I0129 16:30:38.636825 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/658f67cc-4c62-4b84-9fba-60e98ece6389-httpd-config\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.640929 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rrtl\" (UniqueName: \"kubernetes.io/projected/658f67cc-4c62-4b84-9fba-60e98ece6389-kube-api-access-4rrtl\") pod \"neutron-6d896887bc-zxx6j\" (UID: \"658f67cc-4c62-4b84-9fba-60e98ece6389\") " pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.662157 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b5b766675-prdvb" event={"ID":"4684685c-78bd-4773-ba6d-7e663bb1ea19","Type":"ContainerStarted","Data":"8c1b2c1930ec47c41652fcbbebe5b45e1d57d5008a4949c21892375360621707"} Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.662222 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b5b766675-prdvb" event={"ID":"4684685c-78bd-4773-ba6d-7e663bb1ea19","Type":"ContainerStarted","Data":"197c4ddda708a9f4e5d903e484c9c6ee61303941a4e94741252972c9f6f12e53"} Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.667772 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cbdff676-qn2gg" event={"ID":"776df2d4-3174-4c65-9966-658b00bc63fa","Type":"ContainerStarted","Data":"b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7"} Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.667847 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.667883 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-77cbdff676-qn2gg" event={"ID":"776df2d4-3174-4c65-9966-658b00bc63fa","Type":"ContainerStarted","Data":"59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8"} Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.670898 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fv5s" event={"ID":"01eead73-2722-45a1-a5f1-fa4522c0041b","Type":"ContainerStarted","Data":"769bf513924f314d7ab909e795ad11a1fbd5bf38330c8079d419322f03184dfa"} Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.677459 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68f56c8b56-xwgv2" event={"ID":"02fc8cd9-5a26-4ca0-9a6b-f70458ed2977","Type":"ContainerStarted","Data":"ece1605d562cc07e01a06ec9c57d8eec8fc308f4f495040ccf6fa1c0121d1d5e"} Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.678792 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.678839 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.695322 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-556f55978-6k8pl" event={"ID":"fc68545b-8e7a-4b48-86f1-86b5e188672d","Type":"ContainerStarted","Data":"40de6504f27d082a84b740ca7624c95caf3eb9fea448550a13e768af4742a712"} Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.695396 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-556f55978-6k8pl" event={"ID":"fc68545b-8e7a-4b48-86f1-86b5e188672d","Type":"ContainerStarted","Data":"c14423d51b6da665bd0e775b35208dd92d53dcd1877d1b7db7a8875ff801a320"} Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.713656 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"08ccd04e-f148-46a4-88aa-b488fa132756","Type":"ContainerStarted","Data":"1f4a20ba9633695a4901428142322837d95b0c1f44f62ff5a276fbf21c2accf4"} Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.722203 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6b5b766675-prdvb" podStartSLOduration=4.21568255 podStartE2EDuration="7.72217223s" podCreationTimestamp="2026-01-29 16:30:31 +0000 UTC" firstStartedPulling="2026-01-29 16:30:33.736316184 +0000 UTC m=+1117.539293448" lastFinishedPulling="2026-01-29 16:30:37.242805874 +0000 UTC m=+1121.045783128" observedRunningTime="2026-01-29 16:30:38.695593749 +0000 UTC m=+1122.498571033" watchObservedRunningTime="2026-01-29 16:30:38.72217223 +0000 UTC m=+1122.525149484" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.738391 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" event={"ID":"01f1b52b-d5cc-4dfe-9fe8-709c3d996048","Type":"ContainerStarted","Data":"8e4dc383e87e986691717d7c40b793926c59af372a1332a6a4e0b4c2d831d90f"} Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.738565 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.751231 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.754334 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-68f56c8b56-xwgv2" podStartSLOduration=4.754300951 podStartE2EDuration="4.754300951s" podCreationTimestamp="2026-01-29 16:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:38.726266471 +0000 UTC m=+1122.529243755" watchObservedRunningTime="2026-01-29 16:30:38.754300951 +0000 UTC m=+1122.557278215" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.764792 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-6fv5s" podStartSLOduration=3.837036805 podStartE2EDuration="42.764762455s" podCreationTimestamp="2026-01-29 16:29:56 +0000 UTC" firstStartedPulling="2026-01-29 16:29:58.359930651 +0000 UTC m=+1082.162907905" lastFinishedPulling="2026-01-29 16:30:37.287656291 +0000 UTC m=+1121.090633555" observedRunningTime="2026-01-29 16:30:38.750626582 +0000 UTC m=+1122.553603846" watchObservedRunningTime="2026-01-29 16:30:38.764762455 +0000 UTC m=+1122.567739729" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.787978 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-556f55978-6k8pl" podStartSLOduration=4.646899386 podStartE2EDuration="7.787932193s" podCreationTimestamp="2026-01-29 16:30:31 +0000 UTC" firstStartedPulling="2026-01-29 16:30:34.101048087 +0000 UTC m=+1117.904025351" lastFinishedPulling="2026-01-29 16:30:37.242080894 +0000 UTC m=+1121.045058158" observedRunningTime="2026-01-29 16:30:38.776255597 +0000 UTC m=+1122.579232871" watchObservedRunningTime="2026-01-29 16:30:38.787932193 +0000 UTC m=+1122.590909487" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.812681 4895 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/neutron-77cbdff676-qn2gg" podStartSLOduration=3.812573512 podStartE2EDuration="3.812573512s" podCreationTimestamp="2026-01-29 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:38.803392553 +0000 UTC m=+1122.606369827" watchObservedRunningTime="2026-01-29 16:30:38.812573512 +0000 UTC m=+1122.615550776" Jan 29 16:30:38 crc kubenswrapper[4895]: I0129 16:30:38.903819 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" podStartSLOduration=4.903714215 podStartE2EDuration="4.903714215s" podCreationTimestamp="2026-01-29 16:30:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:38.895188453 +0000 UTC m=+1122.698165717" watchObservedRunningTime="2026-01-29 16:30:38.903714215 +0000 UTC m=+1122.706691479" Jan 29 16:30:39 crc kubenswrapper[4895]: I0129 16:30:39.074885 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc11f864-e186-4067-a521-1357781f6e78" path="/var/lib/kubelet/pods/fc11f864-e186-4067-a521-1357781f6e78/volumes" Jan 29 16:30:39 crc kubenswrapper[4895]: E0129 16:30:39.239435 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:30:39 crc kubenswrapper[4895]: E0129 16:30:39.240081 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6djj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(08ccd04e-f148-46a4-88aa-b488fa132756): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:30:39 crc kubenswrapper[4895]: E0129 16:30:39.241883 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" Jan 29 16:30:39 crc kubenswrapper[4895]: I0129 16:30:39.273330 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d896887bc-zxx6j"] Jan 29 16:30:39 crc kubenswrapper[4895]: I0129 16:30:39.754020 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08ccd04e-f148-46a4-88aa-b488fa132756","Type":"ContainerStarted","Data":"20be649c4a0ab5c5af769994c74bc4f918b8f4f6dc2bc2bf975a8d473dbd06ad"} Jan 29 16:30:39 crc kubenswrapper[4895]: E0129 16:30:39.756263 4895 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" Jan 29 16:30:39 crc kubenswrapper[4895]: I0129 16:30:39.770099 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d896887bc-zxx6j" event={"ID":"658f67cc-4c62-4b84-9fba-60e98ece6389","Type":"ContainerStarted","Data":"3e599127c7054234632e900b0fe3fcfcdcb9282ff4ddbb46649d69c533016d0d"} Jan 29 16:30:39 crc kubenswrapper[4895]: I0129 16:30:39.770433 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d896887bc-zxx6j" event={"ID":"658f67cc-4c62-4b84-9fba-60e98ece6389","Type":"ContainerStarted","Data":"32de28959aa65954367b871de0eda393540b9114b7ab99002a5471180525a8d4"} Jan 29 16:30:40 crc kubenswrapper[4895]: I0129 16:30:40.781687 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d896887bc-zxx6j" event={"ID":"658f67cc-4c62-4b84-9fba-60e98ece6389","Type":"ContainerStarted","Data":"44e661c2c2489cc12aeadd3b482b2ac5e73a3814b20f066a437aa59bc0722e06"} Jan 29 16:30:40 crc kubenswrapper[4895]: E0129 16:30:40.783677 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" Jan 29 16:30:40 crc kubenswrapper[4895]: I0129 16:30:40.844052 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d896887bc-zxx6j" podStartSLOduration=2.8440210219999997 podStartE2EDuration="2.844021022s" podCreationTimestamp="2026-01-29 16:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
16:30:40.835042669 +0000 UTC m=+1124.638019963" watchObservedRunningTime="2026-01-29 16:30:40.844021022 +0000 UTC m=+1124.646998316" Jan 29 16:30:41 crc kubenswrapper[4895]: I0129 16:30:41.795686 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:30:43 crc kubenswrapper[4895]: I0129 16:30:43.821597 4895 generic.go:334] "Generic (PLEG): container finished" podID="01eead73-2722-45a1-a5f1-fa4522c0041b" containerID="769bf513924f314d7ab909e795ad11a1fbd5bf38330c8079d419322f03184dfa" exitCode=0 Jan 29 16:30:43 crc kubenswrapper[4895]: I0129 16:30:43.822180 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fv5s" event={"ID":"01eead73-2722-45a1-a5f1-fa4522c0041b","Type":"ContainerDied","Data":"769bf513924f314d7ab909e795ad11a1fbd5bf38330c8079d419322f03184dfa"} Jan 29 16:30:43 crc kubenswrapper[4895]: I0129 16:30:43.902449 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:44 crc kubenswrapper[4895]: I0129 16:30:44.049952 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.354361 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.443157 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.486268 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz8mq\" (UniqueName: \"kubernetes.io/projected/01eead73-2722-45a1-a5f1-fa4522c0041b-kube-api-access-bz8mq\") pod \"01eead73-2722-45a1-a5f1-fa4522c0041b\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.486459 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-db-sync-config-data\") pod \"01eead73-2722-45a1-a5f1-fa4522c0041b\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.486529 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-scripts\") pod \"01eead73-2722-45a1-a5f1-fa4522c0041b\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.486579 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-config-data\") pod \"01eead73-2722-45a1-a5f1-fa4522c0041b\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.486607 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01eead73-2722-45a1-a5f1-fa4522c0041b-etc-machine-id\") pod \"01eead73-2722-45a1-a5f1-fa4522c0041b\" (UID: 
\"01eead73-2722-45a1-a5f1-fa4522c0041b\") " Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.486639 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-combined-ca-bundle\") pod \"01eead73-2722-45a1-a5f1-fa4522c0041b\" (UID: \"01eead73-2722-45a1-a5f1-fa4522c0041b\") " Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.486783 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01eead73-2722-45a1-a5f1-fa4522c0041b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "01eead73-2722-45a1-a5f1-fa4522c0041b" (UID: "01eead73-2722-45a1-a5f1-fa4522c0041b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.487030 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/01eead73-2722-45a1-a5f1-fa4522c0041b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.511059 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "01eead73-2722-45a1-a5f1-fa4522c0041b" (UID: "01eead73-2722-45a1-a5f1-fa4522c0041b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.511259 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-scripts" (OuterVolumeSpecName: "scripts") pod "01eead73-2722-45a1-a5f1-fa4522c0041b" (UID: "01eead73-2722-45a1-a5f1-fa4522c0041b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.513189 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01eead73-2722-45a1-a5f1-fa4522c0041b-kube-api-access-bz8mq" (OuterVolumeSpecName: "kube-api-access-bz8mq") pod "01eead73-2722-45a1-a5f1-fa4522c0041b" (UID: "01eead73-2722-45a1-a5f1-fa4522c0041b"). InnerVolumeSpecName "kube-api-access-bz8mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.525403 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-dml95"] Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.525726 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" podUID="d48aba20-5109-41c6-93ed-33b3e7536815" containerName="dnsmasq-dns" containerID="cri-o://3c6385b0b1bbda766365e1e435d883bc847654f869db3e4383e5c9fe53f98eaa" gracePeriod=10 Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.557034 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01eead73-2722-45a1-a5f1-fa4522c0041b" (UID: "01eead73-2722-45a1-a5f1-fa4522c0041b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.590485 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz8mq\" (UniqueName: \"kubernetes.io/projected/01eead73-2722-45a1-a5f1-fa4522c0041b-kube-api-access-bz8mq\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.591113 4895 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.591171 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.591220 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.600047 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-config-data" (OuterVolumeSpecName: "config-data") pod "01eead73-2722-45a1-a5f1-fa4522c0041b" (UID: "01eead73-2722-45a1-a5f1-fa4522c0041b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.694116 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01eead73-2722-45a1-a5f1-fa4522c0041b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.863823 4895 generic.go:334] "Generic (PLEG): container finished" podID="d48aba20-5109-41c6-93ed-33b3e7536815" containerID="3c6385b0b1bbda766365e1e435d883bc847654f869db3e4383e5c9fe53f98eaa" exitCode=0 Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.863995 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" event={"ID":"d48aba20-5109-41c6-93ed-33b3e7536815","Type":"ContainerDied","Data":"3c6385b0b1bbda766365e1e435d883bc847654f869db3e4383e5c9fe53f98eaa"} Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.866045 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6fv5s" event={"ID":"01eead73-2722-45a1-a5f1-fa4522c0041b","Type":"ContainerDied","Data":"99229e9c46cfeb2bc2d5034d347e2e75de9c5e2023a77fdb0545ff4a4ecf4ae9"} Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.866066 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99229e9c46cfeb2bc2d5034d347e2e75de9c5e2023a77fdb0545ff4a4ecf4ae9" Jan 29 16:30:45 crc kubenswrapper[4895]: I0129 16:30:45.866126 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6fv5s" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.167806 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:30:46 crc kubenswrapper[4895]: E0129 16:30:46.168296 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01eead73-2722-45a1-a5f1-fa4522c0041b" containerName="cinder-db-sync" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.168312 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="01eead73-2722-45a1-a5f1-fa4522c0041b" containerName="cinder-db-sync" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.168509 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="01eead73-2722-45a1-a5f1-fa4522c0041b" containerName="cinder-db-sync" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.169588 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.172848 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.173625 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cb56t" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.177178 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.177500 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.177774 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.196324 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.219382 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-ovsdbserver-sb\") pod \"d48aba20-5109-41c6-93ed-33b3e7536815\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.219621 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-ovsdbserver-nb\") pod \"d48aba20-5109-41c6-93ed-33b3e7536815\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.219686 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-dns-svc\") pod \"d48aba20-5109-41c6-93ed-33b3e7536815\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 
16:30:46.219770 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz24j\" (UniqueName: \"kubernetes.io/projected/d48aba20-5109-41c6-93ed-33b3e7536815-kube-api-access-rz24j\") pod \"d48aba20-5109-41c6-93ed-33b3e7536815\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.220133 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-config\") pod \"d48aba20-5109-41c6-93ed-33b3e7536815\" (UID: \"d48aba20-5109-41c6-93ed-33b3e7536815\") " Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.220400 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.220467 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.220507 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.220556 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.220591 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmh7j\" (UniqueName: \"kubernetes.io/projected/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-kube-api-access-bmh7j\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.220615 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.276066 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-pphqd"] Jan 29 16:30:46 crc kubenswrapper[4895]: E0129 16:30:46.276630 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48aba20-5109-41c6-93ed-33b3e7536815" containerName="init" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.276644 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48aba20-5109-41c6-93ed-33b3e7536815" containerName="init" Jan 29 16:30:46 crc kubenswrapper[4895]: E0129 16:30:46.276663 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48aba20-5109-41c6-93ed-33b3e7536815" containerName="dnsmasq-dns" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.276670 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48aba20-5109-41c6-93ed-33b3e7536815" containerName="dnsmasq-dns" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.276857 4895 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d48aba20-5109-41c6-93ed-33b3e7536815" containerName="dnsmasq-dns" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.277983 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.295949 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-pphqd"] Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.296812 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48aba20-5109-41c6-93ed-33b3e7536815-kube-api-access-rz24j" (OuterVolumeSpecName: "kube-api-access-rz24j") pod "d48aba20-5109-41c6-93ed-33b3e7536815" (UID: "d48aba20-5109-41c6-93ed-33b3e7536815"). InnerVolumeSpecName "kube-api-access-rz24j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.326314 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.326406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mdc7\" (UniqueName: \"kubernetes.io/projected/a980d03e-2583-427c-8133-5723e0eb7f69-kube-api-access-5mdc7\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.326432 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-dns-svc\") pod 
\"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.326453 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.326475 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmh7j\" (UniqueName: \"kubernetes.io/projected/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-kube-api-access-bmh7j\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.326493 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.326520 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-config\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.326538 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " 
pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.326580 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.326635 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.326656 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.326704 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz24j\" (UniqueName: \"kubernetes.io/projected/d48aba20-5109-41c6-93ed-33b3e7536815-kube-api-access-rz24j\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.328645 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.337764 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-scripts\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.340372 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.358300 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-config-data\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.358862 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.373653 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmh7j\" (UniqueName: \"kubernetes.io/projected/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-kube-api-access-bmh7j\") pod \"cinder-scheduler-0\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.398281 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d48aba20-5109-41c6-93ed-33b3e7536815" (UID: "d48aba20-5109-41c6-93ed-33b3e7536815"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.425345 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d48aba20-5109-41c6-93ed-33b3e7536815" (UID: "d48aba20-5109-41c6-93ed-33b3e7536815"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.430543 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.431613 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mdc7\" (UniqueName: \"kubernetes.io/projected/a980d03e-2583-427c-8133-5723e0eb7f69-kube-api-access-5mdc7\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.431710 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-dns-svc\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.431887 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-config\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: 
\"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.432118 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.432338 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.432351 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.433426 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-config" (OuterVolumeSpecName: "config") pod "d48aba20-5109-41c6-93ed-33b3e7536815" (UID: "d48aba20-5109-41c6-93ed-33b3e7536815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.434382 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.434617 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.435657 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-config\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.439905 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-dns-svc\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.447309 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d48aba20-5109-41c6-93ed-33b3e7536815" (UID: "d48aba20-5109-41c6-93ed-33b3e7536815"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.453633 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mdc7\" (UniqueName: \"kubernetes.io/projected/a980d03e-2583-427c-8133-5723e0eb7f69-kube-api-access-5mdc7\") pod \"dnsmasq-dns-58db5546cc-pphqd\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") " pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.492089 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.494937 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.497761 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.501297 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.526440 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.535284 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.535370 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc29g\" (UniqueName: \"kubernetes.io/projected/7deb66f7-138e-44f9-a25f-1a181f755033-kube-api-access-cc29g\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.535415 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7deb66f7-138e-44f9-a25f-1a181f755033-logs\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.535483 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-config-data-custom\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.535537 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-config-data\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 
16:30:46.535584 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7deb66f7-138e-44f9-a25f-1a181f755033-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.535772 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-scripts\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.535924 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.535938 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48aba20-5109-41c6-93ed-33b3e7536815-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.552724 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.640979 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-config-data-custom\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.641125 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-config-data\") pod \"cinder-api-0\" (UID: 
\"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.641376 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7deb66f7-138e-44f9-a25f-1a181f755033-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.641450 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-scripts\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.641734 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7deb66f7-138e-44f9-a25f-1a181f755033-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.641845 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.641908 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc29g\" (UniqueName: \"kubernetes.io/projected/7deb66f7-138e-44f9-a25f-1a181f755033-kube-api-access-cc29g\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.641970 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/7deb66f7-138e-44f9-a25f-1a181f755033-logs\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.643965 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7deb66f7-138e-44f9-a25f-1a181f755033-logs\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.645964 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-config-data-custom\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.646943 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-scripts\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.649291 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-config-data\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.654204 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.667726 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cc29g\" (UniqueName: \"kubernetes.io/projected/7deb66f7-138e-44f9-a25f-1a181f755033-kube-api-access-cc29g\") pod \"cinder-api-0\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.830489 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.888610 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" event={"ID":"d48aba20-5109-41c6-93ed-33b3e7536815","Type":"ContainerDied","Data":"e6afbfa0bcdad981971880e374288d8de47207d17eb07c77b9ddc4c79932906a"} Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.888686 4895 scope.go:117] "RemoveContainer" containerID="3c6385b0b1bbda766365e1e435d883bc847654f869db3e4383e5c9fe53f98eaa" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.888906 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-dml95" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.958713 4895 scope.go:117] "RemoveContainer" containerID="dcecb26c5f504bff62d75772f6e30884891cac84d249aa6f61ecaddd3cf3970f" Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.971420 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-dml95"] Jan 29 16:30:46 crc kubenswrapper[4895]: I0129 16:30:46.990022 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-dml95"] Jan 29 16:30:47 crc kubenswrapper[4895]: I0129 16:30:47.055136 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48aba20-5109-41c6-93ed-33b3e7536815" path="/var/lib/kubelet/pods/d48aba20-5109-41c6-93ed-33b3e7536815/volumes" Jan 29 16:30:47 crc kubenswrapper[4895]: I0129 16:30:47.126749 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:30:47 crc kubenswrapper[4895]: W0129 16:30:47.164521 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode23c0e08_ea0e_4c36_a77a_ab61a3bbbdbb.slice/crio-bc87b806acd2a7e95c0687b595f6cbae58f4bc3cf53eb4a068e6223d2f49724a WatchSource:0}: Error finding container bc87b806acd2a7e95c0687b595f6cbae58f4bc3cf53eb4a068e6223d2f49724a: Status 404 returned error can't find the container with id bc87b806acd2a7e95c0687b595f6cbae58f4bc3cf53eb4a068e6223d2f49724a Jan 29 16:30:47 crc kubenswrapper[4895]: I0129 16:30:47.169239 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-pphqd"] Jan 29 16:30:47 crc kubenswrapper[4895]: W0129 16:30:47.169723 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda980d03e_2583_427c_8133_5723e0eb7f69.slice/crio-7707ff386c2ba18e3512c693422c68d7157cb7f114425b39ff3a622458be5bbd 
WatchSource:0}: Error finding container 7707ff386c2ba18e3512c693422c68d7157cb7f114425b39ff3a622458be5bbd: Status 404 returned error can't find the container with id 7707ff386c2ba18e3512c693422c68d7157cb7f114425b39ff3a622458be5bbd Jan 29 16:30:47 crc kubenswrapper[4895]: I0129 16:30:47.472644 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:30:47 crc kubenswrapper[4895]: W0129 16:30:47.502642 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7deb66f7_138e_44f9_a25f_1a181f755033.slice/crio-f56f6ba029d1ef3f2b620e41e5282f5a276baa53f6c760469553b0780b475696 WatchSource:0}: Error finding container f56f6ba029d1ef3f2b620e41e5282f5a276baa53f6c760469553b0780b475696: Status 404 returned error can't find the container with id f56f6ba029d1ef3f2b620e41e5282f5a276baa53f6c760469553b0780b475696 Jan 29 16:30:47 crc kubenswrapper[4895]: I0129 16:30:47.907111 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb","Type":"ContainerStarted","Data":"bc87b806acd2a7e95c0687b595f6cbae58f4bc3cf53eb4a068e6223d2f49724a"} Jan 29 16:30:47 crc kubenswrapper[4895]: I0129 16:30:47.927324 4895 generic.go:334] "Generic (PLEG): container finished" podID="a980d03e-2583-427c-8133-5723e0eb7f69" containerID="f56367ad6048ab35e65daf3cd990ca0a7306114f58959ae54ff9e6d7574f844f" exitCode=0 Jan 29 16:30:47 crc kubenswrapper[4895]: I0129 16:30:47.927404 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-pphqd" event={"ID":"a980d03e-2583-427c-8133-5723e0eb7f69","Type":"ContainerDied","Data":"f56367ad6048ab35e65daf3cd990ca0a7306114f58959ae54ff9e6d7574f844f"} Jan 29 16:30:47 crc kubenswrapper[4895]: I0129 16:30:47.927503 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-pphqd" 
event={"ID":"a980d03e-2583-427c-8133-5723e0eb7f69","Type":"ContainerStarted","Data":"7707ff386c2ba18e3512c693422c68d7157cb7f114425b39ff3a622458be5bbd"} Jan 29 16:30:47 crc kubenswrapper[4895]: I0129 16:30:47.941750 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7deb66f7-138e-44f9-a25f-1a181f755033","Type":"ContainerStarted","Data":"f56f6ba029d1ef3f2b620e41e5282f5a276baa53f6c760469553b0780b475696"} Jan 29 16:30:48 crc kubenswrapper[4895]: I0129 16:30:48.017293 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:48 crc kubenswrapper[4895]: I0129 16:30:48.514134 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-68f56c8b56-xwgv2" Jan 29 16:30:48 crc kubenswrapper[4895]: I0129 16:30:48.652169 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-798d9b5844-wcjfg"] Jan 29 16:30:48 crc kubenswrapper[4895]: I0129 16:30:48.652579 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-798d9b5844-wcjfg" podUID="8546502f-24d9-407c-86ef-c12e9ccb70e4" containerName="barbican-api-log" containerID="cri-o://5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3" gracePeriod=30 Jan 29 16:30:48 crc kubenswrapper[4895]: I0129 16:30:48.653003 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-798d9b5844-wcjfg" podUID="8546502f-24d9-407c-86ef-c12e9ccb70e4" containerName="barbican-api" containerID="cri-o://706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c" gracePeriod=30 Jan 29 16:30:48 crc kubenswrapper[4895]: I0129 16:30:48.662793 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-798d9b5844-wcjfg" podUID="8546502f-24d9-407c-86ef-c12e9ccb70e4" containerName="barbican-api-log" probeResult="failure" output="Get 
\"http://10.217.0.145:9311/healthcheck\": EOF" Jan 29 16:30:48 crc kubenswrapper[4895]: I0129 16:30:48.963572 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-pphqd" event={"ID":"a980d03e-2583-427c-8133-5723e0eb7f69","Type":"ContainerStarted","Data":"e5b03adefa3c5ee1f7eb9afe5c1d21736345c9ca03e9c7b6890311e1e86837f7"} Jan 29 16:30:48 crc kubenswrapper[4895]: I0129 16:30:48.964071 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:49 crc kubenswrapper[4895]: I0129 16:30:48.999917 4895 generic.go:334] "Generic (PLEG): container finished" podID="8546502f-24d9-407c-86ef-c12e9ccb70e4" containerID="5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3" exitCode=143 Jan 29 16:30:49 crc kubenswrapper[4895]: I0129 16:30:49.000072 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798d9b5844-wcjfg" event={"ID":"8546502f-24d9-407c-86ef-c12e9ccb70e4","Type":"ContainerDied","Data":"5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3"} Jan 29 16:30:49 crc kubenswrapper[4895]: I0129 16:30:49.003179 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7deb66f7-138e-44f9-a25f-1a181f755033","Type":"ContainerStarted","Data":"478f57d68d603d5960de088d411dfe81cf91c5385c7de65269aadce03918b405"} Jan 29 16:30:49 crc kubenswrapper[4895]: I0129 16:30:49.006818 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-pphqd" podStartSLOduration=3.006795419 podStartE2EDuration="3.006795419s" podCreationTimestamp="2026-01-29 16:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:48.994731152 +0000 UTC m=+1132.797708436" watchObservedRunningTime="2026-01-29 16:30:49.006795419 +0000 UTC m=+1132.809772683" Jan 29 16:30:49 crc 
kubenswrapper[4895]: I0129 16:30:49.102661 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:30:50 crc kubenswrapper[4895]: I0129 16:30:50.030778 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb","Type":"ContainerStarted","Data":"2e8ad72a6be80bf9a1d9cdd1ab3b8daa8b16142aaedaf0e8022fbcc66fecd397"} Jan 29 16:30:50 crc kubenswrapper[4895]: I0129 16:30:50.031242 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb","Type":"ContainerStarted","Data":"3f022d840763e9a2bf3e9bcfc83324840484caa5387b92342d679bd70032b0ca"} Jan 29 16:30:50 crc kubenswrapper[4895]: I0129 16:30:50.035092 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7deb66f7-138e-44f9-a25f-1a181f755033" containerName="cinder-api-log" containerID="cri-o://478f57d68d603d5960de088d411dfe81cf91c5385c7de65269aadce03918b405" gracePeriod=30 Jan 29 16:30:50 crc kubenswrapper[4895]: I0129 16:30:50.035192 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7deb66f7-138e-44f9-a25f-1a181f755033","Type":"ContainerStarted","Data":"91d8b1587907bdbf1ca3d4f51bdba90d10f544afb556a62df472395c9dc6d69b"} Jan 29 16:30:50 crc kubenswrapper[4895]: I0129 16:30:50.035237 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 16:30:50 crc kubenswrapper[4895]: I0129 16:30:50.035277 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7deb66f7-138e-44f9-a25f-1a181f755033" containerName="cinder-api" containerID="cri-o://91d8b1587907bdbf1ca3d4f51bdba90d10f544afb556a62df472395c9dc6d69b" gracePeriod=30 Jan 29 16:30:50 crc kubenswrapper[4895]: I0129 16:30:50.062509 4895 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.35228383 podStartE2EDuration="4.062487914s" podCreationTimestamp="2026-01-29 16:30:46 +0000 UTC" firstStartedPulling="2026-01-29 16:30:47.170171522 +0000 UTC m=+1130.973148786" lastFinishedPulling="2026-01-29 16:30:47.880375606 +0000 UTC m=+1131.683352870" observedRunningTime="2026-01-29 16:30:50.058165686 +0000 UTC m=+1133.861142950" watchObservedRunningTime="2026-01-29 16:30:50.062487914 +0000 UTC m=+1133.865465178" Jan 29 16:30:50 crc kubenswrapper[4895]: I0129 16:30:50.093405 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.093376171 podStartE2EDuration="4.093376171s" podCreationTimestamp="2026-01-29 16:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:50.087475371 +0000 UTC m=+1133.890452645" watchObservedRunningTime="2026-01-29 16:30:50.093376171 +0000 UTC m=+1133.896353435" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.052279 4895 generic.go:334] "Generic (PLEG): container finished" podID="7deb66f7-138e-44f9-a25f-1a181f755033" containerID="91d8b1587907bdbf1ca3d4f51bdba90d10f544afb556a62df472395c9dc6d69b" exitCode=0 Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.052680 4895 generic.go:334] "Generic (PLEG): container finished" podID="7deb66f7-138e-44f9-a25f-1a181f755033" containerID="478f57d68d603d5960de088d411dfe81cf91c5385c7de65269aadce03918b405" exitCode=143 Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.052326 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7deb66f7-138e-44f9-a25f-1a181f755033","Type":"ContainerDied","Data":"91d8b1587907bdbf1ca3d4f51bdba90d10f544afb556a62df472395c9dc6d69b"} Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.052807 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"7deb66f7-138e-44f9-a25f-1a181f755033","Type":"ContainerDied","Data":"478f57d68d603d5960de088d411dfe81cf91c5385c7de65269aadce03918b405"} Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.052846 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7deb66f7-138e-44f9-a25f-1a181f755033","Type":"ContainerDied","Data":"f56f6ba029d1ef3f2b620e41e5282f5a276baa53f6c760469553b0780b475696"} Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.052890 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f56f6ba029d1ef3f2b620e41e5282f5a276baa53f6c760469553b0780b475696" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.062006 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.165549 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-config-data-custom\") pod \"7deb66f7-138e-44f9-a25f-1a181f755033\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.165700 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-config-data\") pod \"7deb66f7-138e-44f9-a25f-1a181f755033\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.165798 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7deb66f7-138e-44f9-a25f-1a181f755033-logs\") pod \"7deb66f7-138e-44f9-a25f-1a181f755033\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.165898 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-scripts\") pod \"7deb66f7-138e-44f9-a25f-1a181f755033\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.165947 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-combined-ca-bundle\") pod \"7deb66f7-138e-44f9-a25f-1a181f755033\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.166030 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc29g\" (UniqueName: \"kubernetes.io/projected/7deb66f7-138e-44f9-a25f-1a181f755033-kube-api-access-cc29g\") pod \"7deb66f7-138e-44f9-a25f-1a181f755033\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.166063 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7deb66f7-138e-44f9-a25f-1a181f755033-etc-machine-id\") pod \"7deb66f7-138e-44f9-a25f-1a181f755033\" (UID: \"7deb66f7-138e-44f9-a25f-1a181f755033\") " Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.166677 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7deb66f7-138e-44f9-a25f-1a181f755033-logs" (OuterVolumeSpecName: "logs") pod "7deb66f7-138e-44f9-a25f-1a181f755033" (UID: "7deb66f7-138e-44f9-a25f-1a181f755033"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.166759 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7deb66f7-138e-44f9-a25f-1a181f755033-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7deb66f7-138e-44f9-a25f-1a181f755033" (UID: "7deb66f7-138e-44f9-a25f-1a181f755033"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.174116 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-scripts" (OuterVolumeSpecName: "scripts") pod "7deb66f7-138e-44f9-a25f-1a181f755033" (UID: "7deb66f7-138e-44f9-a25f-1a181f755033"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.187224 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7deb66f7-138e-44f9-a25f-1a181f755033" (UID: "7deb66f7-138e-44f9-a25f-1a181f755033"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.199194 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7deb66f7-138e-44f9-a25f-1a181f755033-kube-api-access-cc29g" (OuterVolumeSpecName: "kube-api-access-cc29g") pod "7deb66f7-138e-44f9-a25f-1a181f755033" (UID: "7deb66f7-138e-44f9-a25f-1a181f755033"). InnerVolumeSpecName "kube-api-access-cc29g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.208043 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7deb66f7-138e-44f9-a25f-1a181f755033" (UID: "7deb66f7-138e-44f9-a25f-1a181f755033"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.236290 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-config-data" (OuterVolumeSpecName: "config-data") pod "7deb66f7-138e-44f9-a25f-1a181f755033" (UID: "7deb66f7-138e-44f9-a25f-1a181f755033"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.268622 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.268657 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.268692 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc29g\" (UniqueName: \"kubernetes.io/projected/7deb66f7-138e-44f9-a25f-1a181f755033-kube-api-access-cc29g\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.268726 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7deb66f7-138e-44f9-a25f-1a181f755033-etc-machine-id\") on node \"crc\" DevicePath 
\"\"" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.268736 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.268744 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7deb66f7-138e-44f9-a25f-1a181f755033-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.268753 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7deb66f7-138e-44f9-a25f-1a181f755033-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:51 crc kubenswrapper[4895]: I0129 16:30:51.499244 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.061924 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.102733 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.110548 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.132789 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:30:52 crc kubenswrapper[4895]: E0129 16:30:52.133632 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7deb66f7-138e-44f9-a25f-1a181f755033" containerName="cinder-api-log" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.133666 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7deb66f7-138e-44f9-a25f-1a181f755033" containerName="cinder-api-log" Jan 29 16:30:52 crc kubenswrapper[4895]: E0129 16:30:52.133696 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7deb66f7-138e-44f9-a25f-1a181f755033" containerName="cinder-api" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.133705 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7deb66f7-138e-44f9-a25f-1a181f755033" containerName="cinder-api" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.133929 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7deb66f7-138e-44f9-a25f-1a181f755033" containerName="cinder-api-log" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.133957 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7deb66f7-138e-44f9-a25f-1a181f755033" containerName="cinder-api" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.136068 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.139793 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.140130 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.141731 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.156436 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.186340 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-config-data\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.186440 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-config-data-custom\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.186475 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrg6t\" (UniqueName: \"kubernetes.io/projected/761f973d-98f3-4972-ab4d-60398028e804-kube-api-access-xrg6t\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.186503 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.186535 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/761f973d-98f3-4972-ab4d-60398028e804-etc-machine-id\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.186577 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-public-tls-certs\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.186619 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/761f973d-98f3-4972-ab4d-60398028e804-logs\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.186655 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.186674 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-scripts\") pod \"cinder-api-0\" 
(UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.288719 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-public-tls-certs\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.288815 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/761f973d-98f3-4972-ab4d-60398028e804-logs\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.288856 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.288905 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-scripts\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.288950 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-config-data\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.289011 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-config-data-custom\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.289036 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrg6t\" (UniqueName: \"kubernetes.io/projected/761f973d-98f3-4972-ab4d-60398028e804-kube-api-access-xrg6t\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.289079 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.289120 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/761f973d-98f3-4972-ab4d-60398028e804-etc-machine-id\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.289241 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/761f973d-98f3-4972-ab4d-60398028e804-etc-machine-id\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.289416 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/761f973d-98f3-4972-ab4d-60398028e804-logs\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc 
kubenswrapper[4895]: I0129 16:30:52.295266 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-public-tls-certs\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.297095 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-config-data\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.298265 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.299021 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-config-data-custom\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.300737 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.304338 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/761f973d-98f3-4972-ab4d-60398028e804-scripts\") pod \"cinder-api-0\" (UID: 
\"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.311476 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrg6t\" (UniqueName: \"kubernetes.io/projected/761f973d-98f3-4972-ab4d-60398028e804-kube-api-access-xrg6t\") pod \"cinder-api-0\" (UID: \"761f973d-98f3-4972-ab4d-60398028e804\") " pod="openstack/cinder-api-0" Jan 29 16:30:52 crc kubenswrapper[4895]: I0129 16:30:52.481749 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.026648 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.053110 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7deb66f7-138e-44f9-a25f-1a181f755033" path="/var/lib/kubelet/pods/7deb66f7-138e-44f9-a25f-1a181f755033/volumes" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.082004 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"761f973d-98f3-4972-ab4d-60398028e804","Type":"ContainerStarted","Data":"a5a35e2fab2a54008eb6738796b6bfa8cbf429abc0362cc950e6d34423533370"} Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.091635 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-798d9b5844-wcjfg" podUID="8546502f-24d9-407c-86ef-c12e9ccb70e4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.145:9311/healthcheck\": read tcp 10.217.0.2:55340->10.217.0.145:9311: read: connection reset by peer" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.091799 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-798d9b5844-wcjfg" podUID="8546502f-24d9-407c-86ef-c12e9ccb70e4" containerName="barbican-api" probeResult="failure" output="Get 
\"http://10.217.0.145:9311/healthcheck\": read tcp 10.217.0.2:55344->10.217.0.145:9311: read: connection reset by peer" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.700617 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.732208 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-config-data\") pod \"8546502f-24d9-407c-86ef-c12e9ccb70e4\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.732316 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-combined-ca-bundle\") pod \"8546502f-24d9-407c-86ef-c12e9ccb70e4\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.732453 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6shbk\" (UniqueName: \"kubernetes.io/projected/8546502f-24d9-407c-86ef-c12e9ccb70e4-kube-api-access-6shbk\") pod \"8546502f-24d9-407c-86ef-c12e9ccb70e4\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.732521 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-config-data-custom\") pod \"8546502f-24d9-407c-86ef-c12e9ccb70e4\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.732560 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8546502f-24d9-407c-86ef-c12e9ccb70e4-logs\") pod 
\"8546502f-24d9-407c-86ef-c12e9ccb70e4\" (UID: \"8546502f-24d9-407c-86ef-c12e9ccb70e4\") " Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.734084 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8546502f-24d9-407c-86ef-c12e9ccb70e4-logs" (OuterVolumeSpecName: "logs") pod "8546502f-24d9-407c-86ef-c12e9ccb70e4" (UID: "8546502f-24d9-407c-86ef-c12e9ccb70e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.746472 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8546502f-24d9-407c-86ef-c12e9ccb70e4-kube-api-access-6shbk" (OuterVolumeSpecName: "kube-api-access-6shbk") pod "8546502f-24d9-407c-86ef-c12e9ccb70e4" (UID: "8546502f-24d9-407c-86ef-c12e9ccb70e4"). InnerVolumeSpecName "kube-api-access-6shbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.767167 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8546502f-24d9-407c-86ef-c12e9ccb70e4" (UID: "8546502f-24d9-407c-86ef-c12e9ccb70e4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.784191 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8546502f-24d9-407c-86ef-c12e9ccb70e4" (UID: "8546502f-24d9-407c-86ef-c12e9ccb70e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.809695 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-config-data" (OuterVolumeSpecName: "config-data") pod "8546502f-24d9-407c-86ef-c12e9ccb70e4" (UID: "8546502f-24d9-407c-86ef-c12e9ccb70e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.835515 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.835570 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8546502f-24d9-407c-86ef-c12e9ccb70e4-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.835585 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.835606 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8546502f-24d9-407c-86ef-c12e9ccb70e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:53 crc kubenswrapper[4895]: I0129 16:30:53.835619 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6shbk\" (UniqueName: \"kubernetes.io/projected/8546502f-24d9-407c-86ef-c12e9ccb70e4-kube-api-access-6shbk\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.095206 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"761f973d-98f3-4972-ab4d-60398028e804","Type":"ContainerStarted","Data":"ff109d80db9955d4fa88ec6a2ed80de43aed4a7ae757dc1f5171f98f0c6bbffd"} Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.099857 4895 generic.go:334] "Generic (PLEG): container finished" podID="8546502f-24d9-407c-86ef-c12e9ccb70e4" containerID="706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c" exitCode=0 Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.099932 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798d9b5844-wcjfg" Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.099948 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798d9b5844-wcjfg" event={"ID":"8546502f-24d9-407c-86ef-c12e9ccb70e4","Type":"ContainerDied","Data":"706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c"} Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.100379 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798d9b5844-wcjfg" event={"ID":"8546502f-24d9-407c-86ef-c12e9ccb70e4","Type":"ContainerDied","Data":"259772f56e79b5d343ef4f40e04252fbc022d06e2dae2c581ff2d1827a751cb9"} Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.100451 4895 scope.go:117] "RemoveContainer" containerID="706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c" Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.147007 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-798d9b5844-wcjfg"] Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.149984 4895 scope.go:117] "RemoveContainer" containerID="5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3" Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.155202 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-798d9b5844-wcjfg"] Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.192425 4895 scope.go:117] 
"RemoveContainer" containerID="706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c" Jan 29 16:30:54 crc kubenswrapper[4895]: E0129 16:30:54.193233 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c\": container with ID starting with 706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c not found: ID does not exist" containerID="706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c" Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.193306 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c"} err="failed to get container status \"706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c\": rpc error: code = NotFound desc = could not find container \"706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c\": container with ID starting with 706e51f9f70ae70d393f92f289f507e4fe393a0bac117e17372329dc94c2ff2c not found: ID does not exist" Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.193366 4895 scope.go:117] "RemoveContainer" containerID="5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3" Jan 29 16:30:54 crc kubenswrapper[4895]: E0129 16:30:54.203486 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3\": container with ID starting with 5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3 not found: ID does not exist" containerID="5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3" Jan 29 16:30:54 crc kubenswrapper[4895]: I0129 16:30:54.203628 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3"} err="failed to get container status \"5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3\": rpc error: code = NotFound desc = could not find container \"5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3\": container with ID starting with 5e2394e8361b2bbc59b7324322c2f5d5a7f316bc779fa2d88517761f9ef175a3 not found: ID does not exist" Jan 29 16:30:55 crc kubenswrapper[4895]: I0129 16:30:55.046702 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8546502f-24d9-407c-86ef-c12e9ccb70e4" path="/var/lib/kubelet/pods/8546502f-24d9-407c-86ef-c12e9ccb70e4/volumes" Jan 29 16:30:55 crc kubenswrapper[4895]: I0129 16:30:55.121753 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"761f973d-98f3-4972-ab4d-60398028e804","Type":"ContainerStarted","Data":"8a182cf2ee7bd85274423acec6335b8777c791eddaa77fd06dd4d87d40d94307"} Jan 29 16:30:55 crc kubenswrapper[4895]: I0129 16:30:55.122169 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 16:30:55 crc kubenswrapper[4895]: I0129 16:30:55.150608 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.150583862 podStartE2EDuration="3.150583862s" podCreationTimestamp="2026-01-29 16:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:55.148073284 +0000 UTC m=+1138.951050578" watchObservedRunningTime="2026-01-29 16:30:55.150583862 +0000 UTC m=+1138.953561136" Jan 29 16:30:56 crc kubenswrapper[4895]: I0129 16:30:56.529943 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:30:56 crc kubenswrapper[4895]: I0129 16:30:56.627853 4895 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-fvrcj"] Jan 29 16:30:56 crc kubenswrapper[4895]: I0129 16:30:56.628257 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" podUID="01f1b52b-d5cc-4dfe-9fe8-709c3d996048" containerName="dnsmasq-dns" containerID="cri-o://8e4dc383e87e986691717d7c40b793926c59af372a1332a6a4e0b4c2d831d90f" gracePeriod=10 Jan 29 16:30:56 crc kubenswrapper[4895]: I0129 16:30:56.835817 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 16:30:56 crc kubenswrapper[4895]: I0129 16:30:56.928794 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.158640 4895 generic.go:334] "Generic (PLEG): container finished" podID="01f1b52b-d5cc-4dfe-9fe8-709c3d996048" containerID="8e4dc383e87e986691717d7c40b793926c59af372a1332a6a4e0b4c2d831d90f" exitCode=0 Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.159020 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" containerName="cinder-scheduler" containerID="cri-o://3f022d840763e9a2bf3e9bcfc83324840484caa5387b92342d679bd70032b0ca" gracePeriod=30 Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.159137 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" event={"ID":"01f1b52b-d5cc-4dfe-9fe8-709c3d996048","Type":"ContainerDied","Data":"8e4dc383e87e986691717d7c40b793926c59af372a1332a6a4e0b4c2d831d90f"} Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.159623 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" containerName="probe" 
containerID="cri-o://2e8ad72a6be80bf9a1d9cdd1ab3b8daa8b16142aaedaf0e8022fbcc66fecd397" gracePeriod=30 Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.299565 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.327344 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-ovsdbserver-sb\") pod \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.417609 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "01f1b52b-d5cc-4dfe-9fe8-709c3d996048" (UID: "01f1b52b-d5cc-4dfe-9fe8-709c3d996048"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.433229 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rkck\" (UniqueName: \"kubernetes.io/projected/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-kube-api-access-6rkck\") pod \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.433413 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-ovsdbserver-nb\") pod \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.433444 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-config\") pod \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.433551 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-dns-svc\") pod \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\" (UID: \"01f1b52b-d5cc-4dfe-9fe8-709c3d996048\") " Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.434004 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.465122 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-kube-api-access-6rkck" (OuterVolumeSpecName: "kube-api-access-6rkck") pod 
"01f1b52b-d5cc-4dfe-9fe8-709c3d996048" (UID: "01f1b52b-d5cc-4dfe-9fe8-709c3d996048"). InnerVolumeSpecName "kube-api-access-6rkck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.521690 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "01f1b52b-d5cc-4dfe-9fe8-709c3d996048" (UID: "01f1b52b-d5cc-4dfe-9fe8-709c3d996048"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.537474 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.537523 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rkck\" (UniqueName: \"kubernetes.io/projected/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-kube-api-access-6rkck\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.538942 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "01f1b52b-d5cc-4dfe-9fe8-709c3d996048" (UID: "01f1b52b-d5cc-4dfe-9fe8-709c3d996048"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.567047 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-config" (OuterVolumeSpecName: "config") pod "01f1b52b-d5cc-4dfe-9fe8-709c3d996048" (UID: "01f1b52b-d5cc-4dfe-9fe8-709c3d996048"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.639347 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:57 crc kubenswrapper[4895]: I0129 16:30:57.639407 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f1b52b-d5cc-4dfe-9fe8-709c3d996048-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:58 crc kubenswrapper[4895]: I0129 16:30:58.176407 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" event={"ID":"01f1b52b-d5cc-4dfe-9fe8-709c3d996048","Type":"ContainerDied","Data":"fe334fd05b21e5375dbf9f3643c540dac27aaede1760180b645d73e7e5f7704d"} Jan 29 16:30:58 crc kubenswrapper[4895]: I0129 16:30:58.176525 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-fvrcj" Jan 29 16:30:58 crc kubenswrapper[4895]: I0129 16:30:58.176738 4895 scope.go:117] "RemoveContainer" containerID="8e4dc383e87e986691717d7c40b793926c59af372a1332a6a4e0b4c2d831d90f" Jan 29 16:30:58 crc kubenswrapper[4895]: I0129 16:30:58.240849 4895 scope.go:117] "RemoveContainer" containerID="f999ac46b7f073a1167b9d590f67f5ec53429367096698286d558468e230f284" Jan 29 16:30:58 crc kubenswrapper[4895]: I0129 16:30:58.244268 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-fvrcj"] Jan 29 16:30:58 crc kubenswrapper[4895]: I0129 16:30:58.255268 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-fvrcj"] Jan 29 16:30:59 crc kubenswrapper[4895]: I0129 16:30:59.047365 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f1b52b-d5cc-4dfe-9fe8-709c3d996048" path="/var/lib/kubelet/pods/01f1b52b-d5cc-4dfe-9fe8-709c3d996048/volumes" 
Jan 29 16:30:59 crc kubenswrapper[4895]: I0129 16:30:59.190273 4895 generic.go:334] "Generic (PLEG): container finished" podID="e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" containerID="2e8ad72a6be80bf9a1d9cdd1ab3b8daa8b16142aaedaf0e8022fbcc66fecd397" exitCode=0 Jan 29 16:30:59 crc kubenswrapper[4895]: I0129 16:30:59.190284 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb","Type":"ContainerDied","Data":"2e8ad72a6be80bf9a1d9cdd1ab3b8daa8b16142aaedaf0e8022fbcc66fecd397"} Jan 29 16:31:01 crc kubenswrapper[4895]: I0129 16:31:01.213242 4895 generic.go:334] "Generic (PLEG): container finished" podID="e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" containerID="3f022d840763e9a2bf3e9bcfc83324840484caa5387b92342d679bd70032b0ca" exitCode=0 Jan 29 16:31:01 crc kubenswrapper[4895]: I0129 16:31:01.213549 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb","Type":"ContainerDied","Data":"3f022d840763e9a2bf3e9bcfc83324840484caa5387b92342d679bd70032b0ca"} Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.196652 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.238542 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.513423 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7f656cd776-2tcds"] Jan 29 16:31:02 crc kubenswrapper[4895]: E0129 16:31:02.514622 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8546502f-24d9-407c-86ef-c12e9ccb70e4" containerName="barbican-api" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.514645 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8546502f-24d9-407c-86ef-c12e9ccb70e4" 
containerName="barbican-api" Jan 29 16:31:02 crc kubenswrapper[4895]: E0129 16:31:02.514668 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f1b52b-d5cc-4dfe-9fe8-709c3d996048" containerName="dnsmasq-dns" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.514678 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f1b52b-d5cc-4dfe-9fe8-709c3d996048" containerName="dnsmasq-dns" Jan 29 16:31:02 crc kubenswrapper[4895]: E0129 16:31:02.514691 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f1b52b-d5cc-4dfe-9fe8-709c3d996048" containerName="init" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.514697 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f1b52b-d5cc-4dfe-9fe8-709c3d996048" containerName="init" Jan 29 16:31:02 crc kubenswrapper[4895]: E0129 16:31:02.514717 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8546502f-24d9-407c-86ef-c12e9ccb70e4" containerName="barbican-api-log" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.514723 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8546502f-24d9-407c-86ef-c12e9ccb70e4" containerName="barbican-api-log" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.514894 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8546502f-24d9-407c-86ef-c12e9ccb70e4" containerName="barbican-api-log" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.514916 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8546502f-24d9-407c-86ef-c12e9ccb70e4" containerName="barbican-api" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.514928 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f1b52b-d5cc-4dfe-9fe8-709c3d996048" containerName="dnsmasq-dns" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.515951 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.529250 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f656cd776-2tcds"] Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.679071 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-internal-tls-certs\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.679320 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-combined-ca-bundle\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.679377 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-logs\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.679411 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvqtd\" (UniqueName: \"kubernetes.io/projected/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-kube-api-access-mvqtd\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.680008 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-config-data\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.680077 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-scripts\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.680111 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-public-tls-certs\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.782325 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-internal-tls-certs\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.782412 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-combined-ca-bundle\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.782443 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-logs\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.782475 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvqtd\" (UniqueName: \"kubernetes.io/projected/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-kube-api-access-mvqtd\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.783162 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-config-data\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.783185 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-logs\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.783503 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-scripts\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.784253 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-public-tls-certs\") pod \"placement-7f656cd776-2tcds\" (UID: 
\"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.805235 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-config-data\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.805563 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-internal-tls-certs\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.805795 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-combined-ca-bundle\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.810025 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-scripts\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.812473 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-public-tls-certs\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: 
I0129 16:31:02.813003 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvqtd\" (UniqueName: \"kubernetes.io/projected/49a0b65b-0a7a-4681-9f06-e1a411e1e8d3-kube-api-access-mvqtd\") pod \"placement-7f656cd776-2tcds\" (UID: \"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3\") " pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.858762 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:02 crc kubenswrapper[4895]: I0129 16:31:02.924310 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7f897d48fd-hgqsw" Jan 29 16:31:03 crc kubenswrapper[4895]: I0129 16:31:03.853628 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 16:31:03 crc kubenswrapper[4895]: I0129 16:31:03.856363 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 16:31:03 crc kubenswrapper[4895]: I0129 16:31:03.860453 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 16:31:03 crc kubenswrapper[4895]: I0129 16:31:03.860474 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 16:31:03 crc kubenswrapper[4895]: I0129 16:31:03.861694 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-rvxvc" Jan 29 16:31:03 crc kubenswrapper[4895]: I0129 16:31:03.869256 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.021173 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-openstack-config-secret\") pod 
\"openstackclient\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.021242 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.021318 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnpxt\" (UniqueName: \"kubernetes.io/projected/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-kube-api-access-wnpxt\") pod \"openstackclient\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.021468 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-openstack-config\") pod \"openstackclient\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.122843 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-openstack-config\") pod \"openstackclient\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.122960 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-openstack-config-secret\") pod \"openstackclient\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " 
pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.122996 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.123031 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnpxt\" (UniqueName: \"kubernetes.io/projected/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-kube-api-access-wnpxt\") pod \"openstackclient\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.124593 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-openstack-config\") pod \"openstackclient\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.131724 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-openstack-config-secret\") pod \"openstackclient\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.131765 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.153734 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-wnpxt\" (UniqueName: \"kubernetes.io/projected/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-kube-api-access-wnpxt\") pod \"openstackclient\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.193212 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.248571 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.262128 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.276654 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.278197 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.297460 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.433328 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50b4deb-a738-4cac-9481-b4085086c116-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d50b4deb-a738-4cac-9481-b4085086c116\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.433418 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d50b4deb-a738-4cac-9481-b4085086c116-openstack-config\") pod \"openstackclient\" (UID: \"d50b4deb-a738-4cac-9481-b4085086c116\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: 
I0129 16:31:04.433455 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4c4p\" (UniqueName: \"kubernetes.io/projected/d50b4deb-a738-4cac-9481-b4085086c116-kube-api-access-r4c4p\") pod \"openstackclient\" (UID: \"d50b4deb-a738-4cac-9481-b4085086c116\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.433506 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d50b4deb-a738-4cac-9481-b4085086c116-openstack-config-secret\") pod \"openstackclient\" (UID: \"d50b4deb-a738-4cac-9481-b4085086c116\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.535782 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50b4deb-a738-4cac-9481-b4085086c116-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d50b4deb-a738-4cac-9481-b4085086c116\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.535899 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d50b4deb-a738-4cac-9481-b4085086c116-openstack-config\") pod \"openstackclient\" (UID: \"d50b4deb-a738-4cac-9481-b4085086c116\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.535938 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4c4p\" (UniqueName: \"kubernetes.io/projected/d50b4deb-a738-4cac-9481-b4085086c116-kube-api-access-r4c4p\") pod \"openstackclient\" (UID: \"d50b4deb-a738-4cac-9481-b4085086c116\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.535992 4895 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d50b4deb-a738-4cac-9481-b4085086c116-openstack-config-secret\") pod \"openstackclient\" (UID: \"d50b4deb-a738-4cac-9481-b4085086c116\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.538085 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d50b4deb-a738-4cac-9481-b4085086c116-openstack-config\") pod \"openstackclient\" (UID: \"d50b4deb-a738-4cac-9481-b4085086c116\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.540346 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d50b4deb-a738-4cac-9481-b4085086c116-openstack-config-secret\") pod \"openstackclient\" (UID: \"d50b4deb-a738-4cac-9481-b4085086c116\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.549858 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50b4deb-a738-4cac-9481-b4085086c116-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d50b4deb-a738-4cac-9481-b4085086c116\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.567512 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4c4p\" (UniqueName: \"kubernetes.io/projected/d50b4deb-a738-4cac-9481-b4085086c116-kube-api-access-r4c4p\") pod \"openstackclient\" (UID: \"d50b4deb-a738-4cac-9481-b4085086c116\") " pod="openstack/openstackclient" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.572745 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 16:31:04 crc kubenswrapper[4895]: I0129 16:31:04.605527 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 16:31:05 crc kubenswrapper[4895]: I0129 16:31:05.691428 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.041385 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.233782 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-config-data-custom\") pod \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.234385 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-combined-ca-bundle\") pod \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.234451 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-config-data\") pod \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.234510 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmh7j\" (UniqueName: \"kubernetes.io/projected/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-kube-api-access-bmh7j\") pod \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.234546 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-etc-machine-id\") pod \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.234601 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-scripts\") pod \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\" (UID: \"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb\") " Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.234808 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" (UID: "e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.235107 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.244010 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-kube-api-access-bmh7j" (OuterVolumeSpecName: "kube-api-access-bmh7j") pod "e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" (UID: "e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb"). InnerVolumeSpecName "kube-api-access-bmh7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.244442 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-scripts" (OuterVolumeSpecName: "scripts") pod "e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" (UID: "e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.265200 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" (UID: "e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.317424 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08ccd04e-f148-46a4-88aa-b488fa132756","Type":"ContainerStarted","Data":"fa24060cf4dbf13e67c4c3a11768e21e58d6fef85c653e0b3348943c9b082f6f"} Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.318377 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.332671 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb","Type":"ContainerDied","Data":"bc87b806acd2a7e95c0687b595f6cbae58f4bc3cf53eb4a068e6223d2f49724a"} Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.333071 4895 scope.go:117] "RemoveContainer" containerID="2e8ad72a6be80bf9a1d9cdd1ab3b8daa8b16142aaedaf0e8022fbcc66fecd397" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.333222 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.337231 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.337304 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.337320 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmh7j\" (UniqueName: \"kubernetes.io/projected/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-kube-api-access-bmh7j\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.344992 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" (UID: "e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.358297 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.456474078 podStartE2EDuration="35.358275952s" podCreationTimestamp="2026-01-29 16:30:32 +0000 UTC" firstStartedPulling="2026-01-29 16:30:34.062337377 +0000 UTC m=+1117.865314641" lastFinishedPulling="2026-01-29 16:31:06.964139251 +0000 UTC m=+1150.767116515" observedRunningTime="2026-01-29 16:31:07.356078442 +0000 UTC m=+1151.159055716" watchObservedRunningTime="2026-01-29 16:31:07.358275952 +0000 UTC m=+1151.161253216" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.430058 4895 scope.go:117] "RemoveContainer" containerID="3f022d840763e9a2bf3e9bcfc83324840484caa5387b92342d679bd70032b0ca" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.439010 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.493630 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f656cd776-2tcds"] Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.517960 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-config-data" (OuterVolumeSpecName: "config-data") pod "e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" (UID: "e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.541778 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.589359 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 16:31:07 crc kubenswrapper[4895]: W0129 16:31:07.600335 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd50b4deb_a738_4cac_9481_b4085086c116.slice/crio-78027534b63cbed08947ab2858e55bf9e047da122edea13f7293bd675e9af92e WatchSource:0}: Error finding container 78027534b63cbed08947ab2858e55bf9e047da122edea13f7293bd675e9af92e: Status 404 returned error can't find the container with id 78027534b63cbed08947ab2858e55bf9e047da122edea13f7293bd675e9af92e Jan 29 16:31:07 crc kubenswrapper[4895]: E0129 16:31:07.644310 4895 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 29 16:31:07 crc kubenswrapper[4895]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_9447397f-1ba6-4a3c-a62b-e8d37c6eee62_0(f297817eb9d9686cb7e966ff0afab7430668743f58ac7ab8929d37a7858b60bb): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f297817eb9d9686cb7e966ff0afab7430668743f58ac7ab8929d37a7858b60bb" Netns:"/var/run/netns/1e9f31dd-1dd1-4e09-8782-af3218d9fe83" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=f297817eb9d9686cb7e966ff0afab7430668743f58ac7ab8929d37a7858b60bb;K8S_POD_UID=9447397f-1ba6-4a3c-a62b-e8d37c6eee62" Path:"" ERRORED: error configuring pod 
[openstack/openstackclient] networking: Multus: [openstack/openstackclient/9447397f-1ba6-4a3c-a62b-e8d37c6eee62]: expected pod UID "9447397f-1ba6-4a3c-a62b-e8d37c6eee62" but got "d50b4deb-a738-4cac-9481-b4085086c116" from Kube API Jan 29 16:31:07 crc kubenswrapper[4895]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 16:31:07 crc kubenswrapper[4895]: > Jan 29 16:31:07 crc kubenswrapper[4895]: E0129 16:31:07.644379 4895 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 29 16:31:07 crc kubenswrapper[4895]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_9447397f-1ba6-4a3c-a62b-e8d37c6eee62_0(f297817eb9d9686cb7e966ff0afab7430668743f58ac7ab8929d37a7858b60bb): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f297817eb9d9686cb7e966ff0afab7430668743f58ac7ab8929d37a7858b60bb" Netns:"/var/run/netns/1e9f31dd-1dd1-4e09-8782-af3218d9fe83" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=f297817eb9d9686cb7e966ff0afab7430668743f58ac7ab8929d37a7858b60bb;K8S_POD_UID=9447397f-1ba6-4a3c-a62b-e8d37c6eee62" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/9447397f-1ba6-4a3c-a62b-e8d37c6eee62]: expected pod UID "9447397f-1ba6-4a3c-a62b-e8d37c6eee62" but got "d50b4deb-a738-4cac-9481-b4085086c116" from Kube API Jan 29 16:31:07 crc kubenswrapper[4895]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 16:31:07 crc kubenswrapper[4895]: > pod="openstack/openstackclient" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.689823 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.717023 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.733513 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:31:07 crc kubenswrapper[4895]: E0129 16:31:07.734254 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" containerName="probe" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.734287 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" containerName="probe" Jan 29 16:31:07 crc kubenswrapper[4895]: E0129 16:31:07.734329 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" containerName="cinder-scheduler" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.734339 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" containerName="cinder-scheduler" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.734565 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" containerName="probe" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.734601 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" containerName="cinder-scheduler" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.735901 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.736040 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.740025 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.745291 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf6f1e04-8de8-41c4-816a-b2293ca9886e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.745379 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6f1e04-8de8-41c4-816a-b2293ca9886e-scripts\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.745402 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6f1e04-8de8-41c4-816a-b2293ca9886e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.745425 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cf6f1e04-8de8-41c4-816a-b2293ca9886e-config-data\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.745464 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf6f1e04-8de8-41c4-816a-b2293ca9886e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.745502 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92dpd\" (UniqueName: \"kubernetes.io/projected/cf6f1e04-8de8-41c4-816a-b2293ca9886e-kube-api-access-92dpd\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.847131 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf6f1e04-8de8-41c4-816a-b2293ca9886e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.847215 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92dpd\" (UniqueName: \"kubernetes.io/projected/cf6f1e04-8de8-41c4-816a-b2293ca9886e-kube-api-access-92dpd\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.847260 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf6f1e04-8de8-41c4-816a-b2293ca9886e-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.847314 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf6f1e04-8de8-41c4-816a-b2293ca9886e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.847420 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6f1e04-8de8-41c4-816a-b2293ca9886e-scripts\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.847450 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6f1e04-8de8-41c4-816a-b2293ca9886e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.847485 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6f1e04-8de8-41c4-816a-b2293ca9886e-config-data\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.851269 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf6f1e04-8de8-41c4-816a-b2293ca9886e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.852411 
4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6f1e04-8de8-41c4-816a-b2293ca9886e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.854440 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6f1e04-8de8-41c4-816a-b2293ca9886e-config-data\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.855065 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6f1e04-8de8-41c4-816a-b2293ca9886e-scripts\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:07 crc kubenswrapper[4895]: I0129 16:31:07.868141 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92dpd\" (UniqueName: \"kubernetes.io/projected/cf6f1e04-8de8-41c4-816a-b2293ca9886e-kube-api-access-92dpd\") pod \"cinder-scheduler-0\" (UID: \"cf6f1e04-8de8-41c4-816a-b2293ca9886e\") " pod="openstack/cinder-scheduler-0" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.076709 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.364195 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f656cd776-2tcds" event={"ID":"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3","Type":"ContainerStarted","Data":"5b0d08a3b0ca6a0dcb6ba5462fe69b96a9640c55c7c43275287dc251a53f1a03"} Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.365155 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f656cd776-2tcds" event={"ID":"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3","Type":"ContainerStarted","Data":"76cbb1cff65a0c6ff775e62016ff915c59c40aa433f24192815e155717582f00"} Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.365195 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f656cd776-2tcds" event={"ID":"49a0b65b-0a7a-4681-9f06-e1a411e1e8d3","Type":"ContainerStarted","Data":"84ad10aea2b070e512d77fa1657b643e1f02bee20e86b4e224d46100e7a66aaf"} Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.367037 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.367080 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f656cd776-2tcds" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.381209 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d50b4deb-a738-4cac-9481-b4085086c116","Type":"ContainerStarted","Data":"78027534b63cbed08947ab2858e55bf9e047da122edea13f7293bd675e9af92e"} Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.394148 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.409400 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9447397f-1ba6-4a3c-a62b-e8d37c6eee62" podUID="d50b4deb-a738-4cac-9481-b4085086c116" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.418463 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7f656cd776-2tcds" podStartSLOduration=6.418432317 podStartE2EDuration="6.418432317s" podCreationTimestamp="2026-01-29 16:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:08.397363026 +0000 UTC m=+1152.200340300" watchObservedRunningTime="2026-01-29 16:31:08.418432317 +0000 UTC m=+1152.221409581" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.421088 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.430153 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9447397f-1ba6-4a3c-a62b-e8d37c6eee62" podUID="d50b4deb-a738-4cac-9481-b4085086c116" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.470662 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-combined-ca-bundle\") pod \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.470719 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-openstack-config-secret\") pod \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.470790 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-openstack-config\") pod \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.470980 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnpxt\" (UniqueName: \"kubernetes.io/projected/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-kube-api-access-wnpxt\") pod \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\" (UID: \"9447397f-1ba6-4a3c-a62b-e8d37c6eee62\") " Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.471745 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9447397f-1ba6-4a3c-a62b-e8d37c6eee62" (UID: "9447397f-1ba6-4a3c-a62b-e8d37c6eee62"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.473431 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.481269 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9447397f-1ba6-4a3c-a62b-e8d37c6eee62" (UID: "9447397f-1ba6-4a3c-a62b-e8d37c6eee62"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.481330 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9447397f-1ba6-4a3c-a62b-e8d37c6eee62" (UID: "9447397f-1ba6-4a3c-a62b-e8d37c6eee62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.498078 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-kube-api-access-wnpxt" (OuterVolumeSpecName: "kube-api-access-wnpxt") pod "9447397f-1ba6-4a3c-a62b-e8d37c6eee62" (UID: "9447397f-1ba6-4a3c-a62b-e8d37c6eee62"). InnerVolumeSpecName "kube-api-access-wnpxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.576374 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.576421 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.576433 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnpxt\" (UniqueName: \"kubernetes.io/projected/9447397f-1ba6-4a3c-a62b-e8d37c6eee62-kube-api-access-wnpxt\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.666419 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.776071 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d896887bc-zxx6j" Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.873198 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77cbdff676-qn2gg"] Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.874053 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77cbdff676-qn2gg" podUID="776df2d4-3174-4c65-9966-658b00bc63fa" containerName="neutron-api" containerID="cri-o://59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8" gracePeriod=30 Jan 29 16:31:08 crc kubenswrapper[4895]: I0129 16:31:08.874309 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77cbdff676-qn2gg" podUID="776df2d4-3174-4c65-9966-658b00bc63fa" containerName="neutron-httpd" 
containerID="cri-o://b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7" gracePeriod=30 Jan 29 16:31:09 crc kubenswrapper[4895]: I0129 16:31:09.053186 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9447397f-1ba6-4a3c-a62b-e8d37c6eee62" path="/var/lib/kubelet/pods/9447397f-1ba6-4a3c-a62b-e8d37c6eee62/volumes" Jan 29 16:31:09 crc kubenswrapper[4895]: I0129 16:31:09.053727 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb" path="/var/lib/kubelet/pods/e23c0e08-ea0e-4c36-a77a-ab61a3bbbdbb/volumes" Jan 29 16:31:09 crc kubenswrapper[4895]: I0129 16:31:09.409290 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cf6f1e04-8de8-41c4-816a-b2293ca9886e","Type":"ContainerStarted","Data":"a330b367a2f5a7be041fad5336f675d0e74935ec3b9801e1d87a78cce6583332"} Jan 29 16:31:09 crc kubenswrapper[4895]: I0129 16:31:09.418800 4895 generic.go:334] "Generic (PLEG): container finished" podID="776df2d4-3174-4c65-9966-658b00bc63fa" containerID="b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7" exitCode=0 Jan 29 16:31:09 crc kubenswrapper[4895]: I0129 16:31:09.418921 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 16:31:09 crc kubenswrapper[4895]: I0129 16:31:09.419647 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cbdff676-qn2gg" event={"ID":"776df2d4-3174-4c65-9966-658b00bc63fa","Type":"ContainerDied","Data":"b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7"} Jan 29 16:31:09 crc kubenswrapper[4895]: I0129 16:31:09.426374 4895 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9447397f-1ba6-4a3c-a62b-e8d37c6eee62" podUID="d50b4deb-a738-4cac-9481-b4085086c116" Jan 29 16:31:09 crc kubenswrapper[4895]: I0129 16:31:09.980694 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-h9tc2"] Jan 29 16:31:09 crc kubenswrapper[4895]: I0129 16:31:09.982252 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-h9tc2" Jan 29 16:31:09 crc kubenswrapper[4895]: I0129 16:31:09.999344 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-h9tc2"] Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.008826 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bce84108-fcb8-4d23-8575-95458b165761-operator-scripts\") pod \"nova-api-db-create-h9tc2\" (UID: \"bce84108-fcb8-4d23-8575-95458b165761\") " pod="openstack/nova-api-db-create-h9tc2" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.008900 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4qz4\" (UniqueName: \"kubernetes.io/projected/bce84108-fcb8-4d23-8575-95458b165761-kube-api-access-z4qz4\") pod \"nova-api-db-create-h9tc2\" (UID: \"bce84108-fcb8-4d23-8575-95458b165761\") " pod="openstack/nova-api-db-create-h9tc2" Jan 29 16:31:10 crc kubenswrapper[4895]: 
I0129 16:31:10.092179 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xdhz7"] Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.093480 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xdhz7" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.111553 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bce84108-fcb8-4d23-8575-95458b165761-operator-scripts\") pod \"nova-api-db-create-h9tc2\" (UID: \"bce84108-fcb8-4d23-8575-95458b165761\") " pod="openstack/nova-api-db-create-h9tc2" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.111623 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4qz4\" (UniqueName: \"kubernetes.io/projected/bce84108-fcb8-4d23-8575-95458b165761-kube-api-access-z4qz4\") pod \"nova-api-db-create-h9tc2\" (UID: \"bce84108-fcb8-4d23-8575-95458b165761\") " pod="openstack/nova-api-db-create-h9tc2" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.111704 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2d8de9-c774-4b94-8bab-0ba6b70dde52-operator-scripts\") pod \"nova-cell0-db-create-xdhz7\" (UID: \"0c2d8de9-c774-4b94-8bab-0ba6b70dde52\") " pod="openstack/nova-cell0-db-create-xdhz7" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.111730 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x97d\" (UniqueName: \"kubernetes.io/projected/0c2d8de9-c774-4b94-8bab-0ba6b70dde52-kube-api-access-4x97d\") pod \"nova-cell0-db-create-xdhz7\" (UID: \"0c2d8de9-c774-4b94-8bab-0ba6b70dde52\") " pod="openstack/nova-cell0-db-create-xdhz7" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.112989 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bce84108-fcb8-4d23-8575-95458b165761-operator-scripts\") pod \"nova-api-db-create-h9tc2\" (UID: \"bce84108-fcb8-4d23-8575-95458b165761\") " pod="openstack/nova-api-db-create-h9tc2" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.125139 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xdhz7"] Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.133089 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4qz4\" (UniqueName: \"kubernetes.io/projected/bce84108-fcb8-4d23-8575-95458b165761-kube-api-access-z4qz4\") pod \"nova-api-db-create-h9tc2\" (UID: \"bce84108-fcb8-4d23-8575-95458b165761\") " pod="openstack/nova-api-db-create-h9tc2" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.214789 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2d8de9-c774-4b94-8bab-0ba6b70dde52-operator-scripts\") pod \"nova-cell0-db-create-xdhz7\" (UID: \"0c2d8de9-c774-4b94-8bab-0ba6b70dde52\") " pod="openstack/nova-cell0-db-create-xdhz7" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.214884 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x97d\" (UniqueName: \"kubernetes.io/projected/0c2d8de9-c774-4b94-8bab-0ba6b70dde52-kube-api-access-4x97d\") pod \"nova-cell0-db-create-xdhz7\" (UID: \"0c2d8de9-c774-4b94-8bab-0ba6b70dde52\") " pod="openstack/nova-cell0-db-create-xdhz7" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.216802 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2d8de9-c774-4b94-8bab-0ba6b70dde52-operator-scripts\") pod \"nova-cell0-db-create-xdhz7\" (UID: \"0c2d8de9-c774-4b94-8bab-0ba6b70dde52\") " 
pod="openstack/nova-cell0-db-create-xdhz7" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.324625 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-922d-account-create-update-r88cx"] Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.325906 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-922d-account-create-update-r88cx" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.332495 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-h9tc2" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.341565 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.424145 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x97d\" (UniqueName: \"kubernetes.io/projected/0c2d8de9-c774-4b94-8bab-0ba6b70dde52-kube-api-access-4x97d\") pod \"nova-cell0-db-create-xdhz7\" (UID: \"0c2d8de9-c774-4b94-8bab-0ba6b70dde52\") " pod="openstack/nova-cell0-db-create-xdhz7" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.430707 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd22v\" (UniqueName: \"kubernetes.io/projected/b3707acb-376a-4821-b6cb-6f1e0a617931-kube-api-access-bd22v\") pod \"nova-api-922d-account-create-update-r88cx\" (UID: \"b3707acb-376a-4821-b6cb-6f1e0a617931\") " pod="openstack/nova-api-922d-account-create-update-r88cx" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.430841 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3707acb-376a-4821-b6cb-6f1e0a617931-operator-scripts\") pod \"nova-api-922d-account-create-update-r88cx\" (UID: \"b3707acb-376a-4821-b6cb-6f1e0a617931\") " 
pod="openstack/nova-api-922d-account-create-update-r88cx" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.438988 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-sb5kw"] Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.441697 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sb5kw" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.520960 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-922d-account-create-update-r88cx"] Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.570185 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cf6f1e04-8de8-41c4-816a-b2293ca9886e","Type":"ContainerStarted","Data":"acc2ae0e5484977758cf2d9b2030d52f24fb2fe724e227e9344de5b2d72a859c"} Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.571680 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd22v\" (UniqueName: \"kubernetes.io/projected/b3707acb-376a-4821-b6cb-6f1e0a617931-kube-api-access-bd22v\") pod \"nova-api-922d-account-create-update-r88cx\" (UID: \"b3707acb-376a-4821-b6cb-6f1e0a617931\") " pod="openstack/nova-api-922d-account-create-update-r88cx" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.571733 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35221a68-d9d1-4630-8ade-81eca4fb1a57-operator-scripts\") pod \"nova-cell1-db-create-sb5kw\" (UID: \"35221a68-d9d1-4630-8ade-81eca4fb1a57\") " pod="openstack/nova-cell1-db-create-sb5kw" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.571829 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3707acb-376a-4821-b6cb-6f1e0a617931-operator-scripts\") pod 
\"nova-api-922d-account-create-update-r88cx\" (UID: \"b3707acb-376a-4821-b6cb-6f1e0a617931\") " pod="openstack/nova-api-922d-account-create-update-r88cx" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.571860 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6fw4\" (UniqueName: \"kubernetes.io/projected/35221a68-d9d1-4630-8ade-81eca4fb1a57-kube-api-access-h6fw4\") pod \"nova-cell1-db-create-sb5kw\" (UID: \"35221a68-d9d1-4630-8ade-81eca4fb1a57\") " pod="openstack/nova-cell1-db-create-sb5kw" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.573121 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3707acb-376a-4821-b6cb-6f1e0a617931-operator-scripts\") pod \"nova-api-922d-account-create-update-r88cx\" (UID: \"b3707acb-376a-4821-b6cb-6f1e0a617931\") " pod="openstack/nova-api-922d-account-create-update-r88cx" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.573169 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sb5kw"] Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.633015 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd22v\" (UniqueName: \"kubernetes.io/projected/b3707acb-376a-4821-b6cb-6f1e0a617931-kube-api-access-bd22v\") pod \"nova-api-922d-account-create-update-r88cx\" (UID: \"b3707acb-376a-4821-b6cb-6f1e0a617931\") " pod="openstack/nova-api-922d-account-create-update-r88cx" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.672515 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5ca0-account-create-update-rpvdg"] Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.674420 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.676223 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35221a68-d9d1-4630-8ade-81eca4fb1a57-operator-scripts\") pod \"nova-cell1-db-create-sb5kw\" (UID: \"35221a68-d9d1-4630-8ade-81eca4fb1a57\") " pod="openstack/nova-cell1-db-create-sb5kw" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.676354 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6fw4\" (UniqueName: \"kubernetes.io/projected/35221a68-d9d1-4630-8ade-81eca4fb1a57-kube-api-access-h6fw4\") pod \"nova-cell1-db-create-sb5kw\" (UID: \"35221a68-d9d1-4630-8ade-81eca4fb1a57\") " pod="openstack/nova-cell1-db-create-sb5kw" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.678501 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35221a68-d9d1-4630-8ade-81eca4fb1a57-operator-scripts\") pod \"nova-cell1-db-create-sb5kw\" (UID: \"35221a68-d9d1-4630-8ade-81eca4fb1a57\") " pod="openstack/nova-cell1-db-create-sb5kw" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.697278 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-922d-account-create-update-r88cx" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.699024 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5ca0-account-create-update-rpvdg"] Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.709888 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.717902 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xdhz7" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.746305 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6fw4\" (UniqueName: \"kubernetes.io/projected/35221a68-d9d1-4630-8ade-81eca4fb1a57-kube-api-access-h6fw4\") pod \"nova-cell1-db-create-sb5kw\" (UID: \"35221a68-d9d1-4630-8ade-81eca4fb1a57\") " pod="openstack/nova-cell1-db-create-sb5kw" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.778896 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqt5j\" (UniqueName: \"kubernetes.io/projected/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278-kube-api-access-nqt5j\") pod \"nova-cell0-5ca0-account-create-update-rpvdg\" (UID: \"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278\") " pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.779055 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278-operator-scripts\") pod \"nova-cell0-5ca0-account-create-update-rpvdg\" (UID: \"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278\") " pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.866904 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-sb5kw" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.881296 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqt5j\" (UniqueName: \"kubernetes.io/projected/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278-kube-api-access-nqt5j\") pod \"nova-cell0-5ca0-account-create-update-rpvdg\" (UID: \"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278\") " pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.882574 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278-operator-scripts\") pod \"nova-cell0-5ca0-account-create-update-rpvdg\" (UID: \"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278\") " pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.881857 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278-operator-scripts\") pod \"nova-cell0-5ca0-account-create-update-rpvdg\" (UID: \"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278\") " pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.903587 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqt5j\" (UniqueName: \"kubernetes.io/projected/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278-kube-api-access-nqt5j\") pod \"nova-cell0-5ca0-account-create-update-rpvdg\" (UID: \"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278\") " pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.950560 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cc6a-account-create-update-4dcck"] Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 
16:31:10.952225 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.965536 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 16:31:10 crc kubenswrapper[4895]: I0129 16:31:10.985380 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cc6a-account-create-update-4dcck"] Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.057693 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.088659 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d6c2e3-3343-463d-b1bf-096a5a7b5108-operator-scripts\") pod \"nova-cell1-cc6a-account-create-update-4dcck\" (UID: \"02d6c2e3-3343-463d-b1bf-096a5a7b5108\") " pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.088751 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htr2l\" (UniqueName: \"kubernetes.io/projected/02d6c2e3-3343-463d-b1bf-096a5a7b5108-kube-api-access-htr2l\") pod \"nova-cell1-cc6a-account-create-update-4dcck\" (UID: \"02d6c2e3-3343-463d-b1bf-096a5a7b5108\") " pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.198784 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d6c2e3-3343-463d-b1bf-096a5a7b5108-operator-scripts\") pod \"nova-cell1-cc6a-account-create-update-4dcck\" (UID: \"02d6c2e3-3343-463d-b1bf-096a5a7b5108\") " 
pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.199392 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htr2l\" (UniqueName: \"kubernetes.io/projected/02d6c2e3-3343-463d-b1bf-096a5a7b5108-kube-api-access-htr2l\") pod \"nova-cell1-cc6a-account-create-update-4dcck\" (UID: \"02d6c2e3-3343-463d-b1bf-096a5a7b5108\") " pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.204393 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d6c2e3-3343-463d-b1bf-096a5a7b5108-operator-scripts\") pod \"nova-cell1-cc6a-account-create-update-4dcck\" (UID: \"02d6c2e3-3343-463d-b1bf-096a5a7b5108\") " pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.233324 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htr2l\" (UniqueName: \"kubernetes.io/projected/02d6c2e3-3343-463d-b1bf-096a5a7b5108-kube-api-access-htr2l\") pod \"nova-cell1-cc6a-account-create-update-4dcck\" (UID: \"02d6c2e3-3343-463d-b1bf-096a5a7b5108\") " pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.319430 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.333019 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-h9tc2"] Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.545532 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-922d-account-create-update-r88cx"] Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.588975 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xdhz7"] Jan 29 16:31:11 crc kubenswrapper[4895]: W0129 16:31:11.604505 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c2d8de9_c774_4b94_8bab_0ba6b70dde52.slice/crio-dfaa845dce6b5cc8ff37b2e784b0e4fae4ca9c1a4586528d45cd9cc9ba3b96b8 WatchSource:0}: Error finding container dfaa845dce6b5cc8ff37b2e784b0e4fae4ca9c1a4586528d45cd9cc9ba3b96b8: Status 404 returned error can't find the container with id dfaa845dce6b5cc8ff37b2e784b0e4fae4ca9c1a4586528d45cd9cc9ba3b96b8 Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.614343 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cf6f1e04-8de8-41c4-816a-b2293ca9886e","Type":"ContainerStarted","Data":"d1c916d85cef594a2dab43e5544df892a3db4df8bcff0ba75395669a34fa7e3a"} Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.617437 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-h9tc2" event={"ID":"bce84108-fcb8-4d23-8575-95458b165761","Type":"ContainerStarted","Data":"34b2807fb1e02285ac7fa82189654422e04a35c1ac68d072a4797505450797af"} Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.666489 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.666462987 podStartE2EDuration="4.666462987s" 
podCreationTimestamp="2026-01-29 16:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:11.63857505 +0000 UTC m=+1155.441552334" watchObservedRunningTime="2026-01-29 16:31:11.666462987 +0000 UTC m=+1155.469440251" Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.678152 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sb5kw"] Jan 29 16:31:11 crc kubenswrapper[4895]: I0129 16:31:11.930483 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5ca0-account-create-update-rpvdg"] Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.131821 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cc6a-account-create-update-4dcck"] Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.637855 4895 generic.go:334] "Generic (PLEG): container finished" podID="bce84108-fcb8-4d23-8575-95458b165761" containerID="11bf490bb5a899a45bacba94b61047fe1def78fd30a8fee4035b5a5b459698b0" exitCode=0 Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.637915 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-h9tc2" event={"ID":"bce84108-fcb8-4d23-8575-95458b165761","Type":"ContainerDied","Data":"11bf490bb5a899a45bacba94b61047fe1def78fd30a8fee4035b5a5b459698b0"} Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.641390 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" event={"ID":"02d6c2e3-3343-463d-b1bf-096a5a7b5108","Type":"ContainerStarted","Data":"9a0beeaf50f23e3987d88c9287c6434e6cf1eb4e578eafe6d1abd2f69f373769"} Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.641444 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" 
event={"ID":"02d6c2e3-3343-463d-b1bf-096a5a7b5108","Type":"ContainerStarted","Data":"b290bad3d841fa09cdb85b9a3f49ca363c57142e10bfeb682874108df560fafe"} Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.663543 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" event={"ID":"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278","Type":"ContainerStarted","Data":"1919c85f4c8b420396935fabcb6a0ce3bd7dbb180ce7231267d960897ab20c45"} Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.663623 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" event={"ID":"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278","Type":"ContainerStarted","Data":"9477b315ca22effd4dd483eb7dbf8874f9d54181b40487ee2688cb8142154a45"} Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.674653 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-922d-account-create-update-r88cx" event={"ID":"b3707acb-376a-4821-b6cb-6f1e0a617931","Type":"ContainerStarted","Data":"acc8eb58be376a4d3fcc57bb807fb6038e855125c8717fd23fcf1d1abfa6b615"} Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.674729 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-922d-account-create-update-r88cx" event={"ID":"b3707acb-376a-4821-b6cb-6f1e0a617931","Type":"ContainerStarted","Data":"f138054ed1ffb17bd3734a069dedb58808109c2772911fd93f3b5688cad94012"} Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.698802 4895 generic.go:334] "Generic (PLEG): container finished" podID="0c2d8de9-c774-4b94-8bab-0ba6b70dde52" containerID="dd3b74b9375bc6120a1fd38e24b9127aeebdb5398ba001065badc08cc5ba3188" exitCode=0 Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.699192 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xdhz7" 
event={"ID":"0c2d8de9-c774-4b94-8bab-0ba6b70dde52","Type":"ContainerDied","Data":"dd3b74b9375bc6120a1fd38e24b9127aeebdb5398ba001065badc08cc5ba3188"} Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.700282 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xdhz7" event={"ID":"0c2d8de9-c774-4b94-8bab-0ba6b70dde52","Type":"ContainerStarted","Data":"dfaa845dce6b5cc8ff37b2e784b0e4fae4ca9c1a4586528d45cd9cc9ba3b96b8"} Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.717474 4895 generic.go:334] "Generic (PLEG): container finished" podID="35221a68-d9d1-4630-8ade-81eca4fb1a57" containerID="9046ee3b885bd70a3e4f41768dbbfaf18578f4df0f31b545d1f61793983a30aa" exitCode=0 Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.717716 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sb5kw" event={"ID":"35221a68-d9d1-4630-8ade-81eca4fb1a57","Type":"ContainerDied","Data":"9046ee3b885bd70a3e4f41768dbbfaf18578f4df0f31b545d1f61793983a30aa"} Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.717769 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sb5kw" event={"ID":"35221a68-d9d1-4630-8ade-81eca4fb1a57","Type":"ContainerStarted","Data":"e0c1938bb57a2f314d48052689cd13015c6b5a095a366241ca292a8a7cfe4027"} Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.755338 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" podStartSLOduration=2.755305201 podStartE2EDuration="2.755305201s" podCreationTimestamp="2026-01-29 16:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:12.71953272 +0000 UTC m=+1156.522509994" watchObservedRunningTime="2026-01-29 16:31:12.755305201 +0000 UTC m=+1156.558282475" Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.766778 4895 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-922d-account-create-update-r88cx" podStartSLOduration=2.766750221 podStartE2EDuration="2.766750221s" podCreationTimestamp="2026-01-29 16:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:12.742443351 +0000 UTC m=+1156.545420625" watchObservedRunningTime="2026-01-29 16:31:12.766750221 +0000 UTC m=+1156.569727495" Jan 29 16:31:12 crc kubenswrapper[4895]: I0129 16:31:12.777691 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" podStartSLOduration=2.777665556 podStartE2EDuration="2.777665556s" podCreationTimestamp="2026-01-29 16:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:12.761337794 +0000 UTC m=+1156.564315068" watchObservedRunningTime="2026-01-29 16:31:12.777665556 +0000 UTC m=+1156.580642840" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.077616 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.231811 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.280617 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-httpd-config\") pod \"776df2d4-3174-4c65-9966-658b00bc63fa\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.280709 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4nm4\" (UniqueName: \"kubernetes.io/projected/776df2d4-3174-4c65-9966-658b00bc63fa-kube-api-access-b4nm4\") pod \"776df2d4-3174-4c65-9966-658b00bc63fa\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.280773 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-combined-ca-bundle\") pod \"776df2d4-3174-4c65-9966-658b00bc63fa\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.280823 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-ovndb-tls-certs\") pod \"776df2d4-3174-4c65-9966-658b00bc63fa\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.280900 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-config\") pod \"776df2d4-3174-4c65-9966-658b00bc63fa\" (UID: \"776df2d4-3174-4c65-9966-658b00bc63fa\") " Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.302634 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "776df2d4-3174-4c65-9966-658b00bc63fa" (UID: "776df2d4-3174-4c65-9966-658b00bc63fa"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.319277 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/776df2d4-3174-4c65-9966-658b00bc63fa-kube-api-access-b4nm4" (OuterVolumeSpecName: "kube-api-access-b4nm4") pod "776df2d4-3174-4c65-9966-658b00bc63fa" (UID: "776df2d4-3174-4c65-9966-658b00bc63fa"). InnerVolumeSpecName "kube-api-access-b4nm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.359268 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-config" (OuterVolumeSpecName: "config") pod "776df2d4-3174-4c65-9966-658b00bc63fa" (UID: "776df2d4-3174-4c65-9966-658b00bc63fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.365294 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "776df2d4-3174-4c65-9966-658b00bc63fa" (UID: "776df2d4-3174-4c65-9966-658b00bc63fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.366089 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "776df2d4-3174-4c65-9966-658b00bc63fa" (UID: "776df2d4-3174-4c65-9966-658b00bc63fa"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.383677 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.383729 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4nm4\" (UniqueName: \"kubernetes.io/projected/776df2d4-3174-4c65-9966-658b00bc63fa-kube-api-access-b4nm4\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.383746 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.383758 4895 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.383771 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/776df2d4-3174-4c65-9966-658b00bc63fa-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.731638 4895 generic.go:334] "Generic (PLEG): container finished" podID="02d6c2e3-3343-463d-b1bf-096a5a7b5108" containerID="9a0beeaf50f23e3987d88c9287c6434e6cf1eb4e578eafe6d1abd2f69f373769" exitCode=0 Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.731712 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" event={"ID":"02d6c2e3-3343-463d-b1bf-096a5a7b5108","Type":"ContainerDied","Data":"9a0beeaf50f23e3987d88c9287c6434e6cf1eb4e578eafe6d1abd2f69f373769"} Jan 29 16:31:13 crc 
kubenswrapper[4895]: I0129 16:31:13.734609 4895 generic.go:334] "Generic (PLEG): container finished" podID="d250ad9a-db3b-4e4b-a5f8-1b4ab945c278" containerID="1919c85f4c8b420396935fabcb6a0ce3bd7dbb180ce7231267d960897ab20c45" exitCode=0 Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.734704 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" event={"ID":"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278","Type":"ContainerDied","Data":"1919c85f4c8b420396935fabcb6a0ce3bd7dbb180ce7231267d960897ab20c45"} Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.737242 4895 generic.go:334] "Generic (PLEG): container finished" podID="b3707acb-376a-4821-b6cb-6f1e0a617931" containerID="acc8eb58be376a4d3fcc57bb807fb6038e855125c8717fd23fcf1d1abfa6b615" exitCode=0 Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.737333 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-922d-account-create-update-r88cx" event={"ID":"b3707acb-376a-4821-b6cb-6f1e0a617931","Type":"ContainerDied","Data":"acc8eb58be376a4d3fcc57bb807fb6038e855125c8717fd23fcf1d1abfa6b615"} Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.742704 4895 generic.go:334] "Generic (PLEG): container finished" podID="776df2d4-3174-4c65-9966-658b00bc63fa" containerID="59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8" exitCode=0 Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.742763 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77cbdff676-qn2gg" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.742790 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cbdff676-qn2gg" event={"ID":"776df2d4-3174-4c65-9966-658b00bc63fa","Type":"ContainerDied","Data":"59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8"} Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.743010 4895 scope.go:117] "RemoveContainer" containerID="b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.742858 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cbdff676-qn2gg" event={"ID":"776df2d4-3174-4c65-9966-658b00bc63fa","Type":"ContainerDied","Data":"1c3361369e1b0929b0be94e19a539fc53746816169ced2cbea6fdc781bef7fa1"} Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.790140 4895 scope.go:117] "RemoveContainer" containerID="59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.845808 4895 scope.go:117] "RemoveContainer" containerID="b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7" Jan 29 16:31:13 crc kubenswrapper[4895]: E0129 16:31:13.846534 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7\": container with ID starting with b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7 not found: ID does not exist" containerID="b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.846609 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7"} err="failed to get container status 
\"b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7\": rpc error: code = NotFound desc = could not find container \"b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7\": container with ID starting with b7780ca0cd3b4172fb143dba45d34423411ba735203527e4ef3ce34a47bd3ea7 not found: ID does not exist" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.846652 4895 scope.go:117] "RemoveContainer" containerID="59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8" Jan 29 16:31:13 crc kubenswrapper[4895]: E0129 16:31:13.847290 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8\": container with ID starting with 59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8 not found: ID does not exist" containerID="59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.847319 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8"} err="failed to get container status \"59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8\": rpc error: code = NotFound desc = could not find container \"59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8\": container with ID starting with 59b280e4a82e126039ddf08d0b81492200c685f78ebbdb9dd8411e5fcd4090c8 not found: ID does not exist" Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.850852 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77cbdff676-qn2gg"] Jan 29 16:31:13 crc kubenswrapper[4895]: I0129 16:31:13.867882 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-77cbdff676-qn2gg"] Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.085758 4895 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-db-create-sb5kw" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.198045 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35221a68-d9d1-4630-8ade-81eca4fb1a57-operator-scripts\") pod \"35221a68-d9d1-4630-8ade-81eca4fb1a57\" (UID: \"35221a68-d9d1-4630-8ade-81eca4fb1a57\") " Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.198276 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6fw4\" (UniqueName: \"kubernetes.io/projected/35221a68-d9d1-4630-8ade-81eca4fb1a57-kube-api-access-h6fw4\") pod \"35221a68-d9d1-4630-8ade-81eca4fb1a57\" (UID: \"35221a68-d9d1-4630-8ade-81eca4fb1a57\") " Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.200397 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35221a68-d9d1-4630-8ade-81eca4fb1a57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35221a68-d9d1-4630-8ade-81eca4fb1a57" (UID: "35221a68-d9d1-4630-8ade-81eca4fb1a57"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.206263 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35221a68-d9d1-4630-8ade-81eca4fb1a57-kube-api-access-h6fw4" (OuterVolumeSpecName: "kube-api-access-h6fw4") pod "35221a68-d9d1-4630-8ade-81eca4fb1a57" (UID: "35221a68-d9d1-4630-8ade-81eca4fb1a57"). InnerVolumeSpecName "kube-api-access-h6fw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.289062 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xdhz7" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.300417 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6fw4\" (UniqueName: \"kubernetes.io/projected/35221a68-d9d1-4630-8ade-81eca4fb1a57-kube-api-access-h6fw4\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.300455 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35221a68-d9d1-4630-8ade-81eca4fb1a57-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.300927 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-h9tc2" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.401681 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x97d\" (UniqueName: \"kubernetes.io/projected/0c2d8de9-c774-4b94-8bab-0ba6b70dde52-kube-api-access-4x97d\") pod \"0c2d8de9-c774-4b94-8bab-0ba6b70dde52\" (UID: \"0c2d8de9-c774-4b94-8bab-0ba6b70dde52\") " Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.401836 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4qz4\" (UniqueName: \"kubernetes.io/projected/bce84108-fcb8-4d23-8575-95458b165761-kube-api-access-z4qz4\") pod \"bce84108-fcb8-4d23-8575-95458b165761\" (UID: \"bce84108-fcb8-4d23-8575-95458b165761\") " Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.401896 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bce84108-fcb8-4d23-8575-95458b165761-operator-scripts\") pod \"bce84108-fcb8-4d23-8575-95458b165761\" (UID: \"bce84108-fcb8-4d23-8575-95458b165761\") " Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.402086 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2d8de9-c774-4b94-8bab-0ba6b70dde52-operator-scripts\") pod \"0c2d8de9-c774-4b94-8bab-0ba6b70dde52\" (UID: \"0c2d8de9-c774-4b94-8bab-0ba6b70dde52\") " Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.402785 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2d8de9-c774-4b94-8bab-0ba6b70dde52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c2d8de9-c774-4b94-8bab-0ba6b70dde52" (UID: "0c2d8de9-c774-4b94-8bab-0ba6b70dde52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.402848 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bce84108-fcb8-4d23-8575-95458b165761-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bce84108-fcb8-4d23-8575-95458b165761" (UID: "bce84108-fcb8-4d23-8575-95458b165761"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.411815 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce84108-fcb8-4d23-8575-95458b165761-kube-api-access-z4qz4" (OuterVolumeSpecName: "kube-api-access-z4qz4") pod "bce84108-fcb8-4d23-8575-95458b165761" (UID: "bce84108-fcb8-4d23-8575-95458b165761"). InnerVolumeSpecName "kube-api-access-z4qz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.412029 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2d8de9-c774-4b94-8bab-0ba6b70dde52-kube-api-access-4x97d" (OuterVolumeSpecName: "kube-api-access-4x97d") pod "0c2d8de9-c774-4b94-8bab-0ba6b70dde52" (UID: "0c2d8de9-c774-4b94-8bab-0ba6b70dde52"). 
InnerVolumeSpecName "kube-api-access-4x97d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.504284 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2d8de9-c774-4b94-8bab-0ba6b70dde52-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.504343 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x97d\" (UniqueName: \"kubernetes.io/projected/0c2d8de9-c774-4b94-8bab-0ba6b70dde52-kube-api-access-4x97d\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.504361 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4qz4\" (UniqueName: \"kubernetes.io/projected/bce84108-fcb8-4d23-8575-95458b165761-kube-api-access-z4qz4\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.504375 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bce84108-fcb8-4d23-8575-95458b165761-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.759112 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xdhz7" event={"ID":"0c2d8de9-c774-4b94-8bab-0ba6b70dde52","Type":"ContainerDied","Data":"dfaa845dce6b5cc8ff37b2e784b0e4fae4ca9c1a4586528d45cd9cc9ba3b96b8"} Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.759158 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfaa845dce6b5cc8ff37b2e784b0e4fae4ca9c1a4586528d45cd9cc9ba3b96b8" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.759217 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xdhz7" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.772199 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sb5kw" event={"ID":"35221a68-d9d1-4630-8ade-81eca4fb1a57","Type":"ContainerDied","Data":"e0c1938bb57a2f314d48052689cd13015c6b5a095a366241ca292a8a7cfe4027"} Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.772254 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0c1938bb57a2f314d48052689cd13015c6b5a095a366241ca292a8a7cfe4027" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.772396 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sb5kw" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.774929 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-h9tc2" Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.775604 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-h9tc2" event={"ID":"bce84108-fcb8-4d23-8575-95458b165761","Type":"ContainerDied","Data":"34b2807fb1e02285ac7fa82189654422e04a35c1ac68d072a4797505450797af"} Jan 29 16:31:14 crc kubenswrapper[4895]: I0129 16:31:14.775664 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34b2807fb1e02285ac7fa82189654422e04a35c1ac68d072a4797505450797af" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.053585 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="776df2d4-3174-4c65-9966-658b00bc63fa" path="/var/lib/kubelet/pods/776df2d4-3174-4c65-9966-658b00bc63fa/volumes" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.219406 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-922d-account-create-update-r88cx" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.339789 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3707acb-376a-4821-b6cb-6f1e0a617931-operator-scripts\") pod \"b3707acb-376a-4821-b6cb-6f1e0a617931\" (UID: \"b3707acb-376a-4821-b6cb-6f1e0a617931\") " Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.340273 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd22v\" (UniqueName: \"kubernetes.io/projected/b3707acb-376a-4821-b6cb-6f1e0a617931-kube-api-access-bd22v\") pod \"b3707acb-376a-4821-b6cb-6f1e0a617931\" (UID: \"b3707acb-376a-4821-b6cb-6f1e0a617931\") " Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.341637 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3707acb-376a-4821-b6cb-6f1e0a617931-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3707acb-376a-4821-b6cb-6f1e0a617931" (UID: "b3707acb-376a-4821-b6cb-6f1e0a617931"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.344690 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.346153 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.348509 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3707acb-376a-4821-b6cb-6f1e0a617931-kube-api-access-bd22v" (OuterVolumeSpecName: "kube-api-access-bd22v") pod "b3707acb-376a-4821-b6cb-6f1e0a617931" (UID: "b3707acb-376a-4821-b6cb-6f1e0a617931"). InnerVolumeSpecName "kube-api-access-bd22v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.447468 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278-operator-scripts\") pod \"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278\" (UID: \"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278\") " Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.447535 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htr2l\" (UniqueName: \"kubernetes.io/projected/02d6c2e3-3343-463d-b1bf-096a5a7b5108-kube-api-access-htr2l\") pod \"02d6c2e3-3343-463d-b1bf-096a5a7b5108\" (UID: \"02d6c2e3-3343-463d-b1bf-096a5a7b5108\") " Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.447576 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d6c2e3-3343-463d-b1bf-096a5a7b5108-operator-scripts\") pod \"02d6c2e3-3343-463d-b1bf-096a5a7b5108\" (UID: \"02d6c2e3-3343-463d-b1bf-096a5a7b5108\") " Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.447620 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqt5j\" (UniqueName: \"kubernetes.io/projected/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278-kube-api-access-nqt5j\") pod \"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278\" (UID: \"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278\") " 
Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.448264 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd22v\" (UniqueName: \"kubernetes.io/projected/b3707acb-376a-4821-b6cb-6f1e0a617931-kube-api-access-bd22v\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.448284 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3707acb-376a-4821-b6cb-6f1e0a617931-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.448596 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d6c2e3-3343-463d-b1bf-096a5a7b5108-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02d6c2e3-3343-463d-b1bf-096a5a7b5108" (UID: "02d6c2e3-3343-463d-b1bf-096a5a7b5108"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.448624 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d250ad9a-db3b-4e4b-a5f8-1b4ab945c278" (UID: "d250ad9a-db3b-4e4b-a5f8-1b4ab945c278"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.455339 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d6c2e3-3343-463d-b1bf-096a5a7b5108-kube-api-access-htr2l" (OuterVolumeSpecName: "kube-api-access-htr2l") pod "02d6c2e3-3343-463d-b1bf-096a5a7b5108" (UID: "02d6c2e3-3343-463d-b1bf-096a5a7b5108"). InnerVolumeSpecName "kube-api-access-htr2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.462403 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278-kube-api-access-nqt5j" (OuterVolumeSpecName: "kube-api-access-nqt5j") pod "d250ad9a-db3b-4e4b-a5f8-1b4ab945c278" (UID: "d250ad9a-db3b-4e4b-a5f8-1b4ab945c278"). InnerVolumeSpecName "kube-api-access-nqt5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.552414 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.552459 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htr2l\" (UniqueName: \"kubernetes.io/projected/02d6c2e3-3343-463d-b1bf-096a5a7b5108-kube-api-access-htr2l\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.552471 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d6c2e3-3343-463d-b1bf-096a5a7b5108-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.552481 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqt5j\" (UniqueName: \"kubernetes.io/projected/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278-kube-api-access-nqt5j\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.789118 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-922d-account-create-update-r88cx" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.789956 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-922d-account-create-update-r88cx" event={"ID":"b3707acb-376a-4821-b6cb-6f1e0a617931","Type":"ContainerDied","Data":"f138054ed1ffb17bd3734a069dedb58808109c2772911fd93f3b5688cad94012"} Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.789995 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f138054ed1ffb17bd3734a069dedb58808109c2772911fd93f3b5688cad94012" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.792226 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" event={"ID":"02d6c2e3-3343-463d-b1bf-096a5a7b5108","Type":"ContainerDied","Data":"b290bad3d841fa09cdb85b9a3f49ca363c57142e10bfeb682874108df560fafe"} Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.792255 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b290bad3d841fa09cdb85b9a3f49ca363c57142e10bfeb682874108df560fafe" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.792291 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cc6a-account-create-update-4dcck" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.795165 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" event={"ID":"d250ad9a-db3b-4e4b-a5f8-1b4ab945c278","Type":"ContainerDied","Data":"9477b315ca22effd4dd483eb7dbf8874f9d54181b40487ee2688cb8142154a45"} Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.795223 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9477b315ca22effd4dd483eb7dbf8874f9d54181b40487ee2688cb8142154a45" Jan 29 16:31:15 crc kubenswrapper[4895]: I0129 16:31:15.795194 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5ca0-account-create-update-rpvdg" Jan 29 16:31:18 crc kubenswrapper[4895]: I0129 16:31:18.329434 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.771678 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k74gb"] Jan 29 16:31:20 crc kubenswrapper[4895]: E0129 16:31:20.772693 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35221a68-d9d1-4630-8ade-81eca4fb1a57" containerName="mariadb-database-create" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.772715 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="35221a68-d9d1-4630-8ade-81eca4fb1a57" containerName="mariadb-database-create" Jan 29 16:31:20 crc kubenswrapper[4895]: E0129 16:31:20.772729 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776df2d4-3174-4c65-9966-658b00bc63fa" containerName="neutron-httpd" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.772738 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="776df2d4-3174-4c65-9966-658b00bc63fa" containerName="neutron-httpd" Jan 29 16:31:20 crc 
kubenswrapper[4895]: E0129 16:31:20.772755 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce84108-fcb8-4d23-8575-95458b165761" containerName="mariadb-database-create" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.772764 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce84108-fcb8-4d23-8575-95458b165761" containerName="mariadb-database-create" Jan 29 16:31:20 crc kubenswrapper[4895]: E0129 16:31:20.772788 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3707acb-376a-4821-b6cb-6f1e0a617931" containerName="mariadb-account-create-update" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.772797 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3707acb-376a-4821-b6cb-6f1e0a617931" containerName="mariadb-account-create-update" Jan 29 16:31:20 crc kubenswrapper[4895]: E0129 16:31:20.772809 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776df2d4-3174-4c65-9966-658b00bc63fa" containerName="neutron-api" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.772816 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="776df2d4-3174-4c65-9966-658b00bc63fa" containerName="neutron-api" Jan 29 16:31:20 crc kubenswrapper[4895]: E0129 16:31:20.772834 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d250ad9a-db3b-4e4b-a5f8-1b4ab945c278" containerName="mariadb-account-create-update" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.772842 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d250ad9a-db3b-4e4b-a5f8-1b4ab945c278" containerName="mariadb-account-create-update" Jan 29 16:31:20 crc kubenswrapper[4895]: E0129 16:31:20.772854 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d6c2e3-3343-463d-b1bf-096a5a7b5108" containerName="mariadb-account-create-update" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.772881 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="02d6c2e3-3343-463d-b1bf-096a5a7b5108" containerName="mariadb-account-create-update" Jan 29 16:31:20 crc kubenswrapper[4895]: E0129 16:31:20.772909 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2d8de9-c774-4b94-8bab-0ba6b70dde52" containerName="mariadb-database-create" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.772917 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2d8de9-c774-4b94-8bab-0ba6b70dde52" containerName="mariadb-database-create" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.773122 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce84108-fcb8-4d23-8575-95458b165761" containerName="mariadb-database-create" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.773139 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3707acb-376a-4821-b6cb-6f1e0a617931" containerName="mariadb-account-create-update" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.773153 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d6c2e3-3343-463d-b1bf-096a5a7b5108" containerName="mariadb-account-create-update" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.773165 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2d8de9-c774-4b94-8bab-0ba6b70dde52" containerName="mariadb-database-create" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.773183 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="776df2d4-3174-4c65-9966-658b00bc63fa" containerName="neutron-api" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.773193 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="35221a68-d9d1-4630-8ade-81eca4fb1a57" containerName="mariadb-database-create" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.773207 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="776df2d4-3174-4c65-9966-658b00bc63fa" containerName="neutron-httpd" Jan 29 16:31:20 crc 
kubenswrapper[4895]: I0129 16:31:20.773221 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d250ad9a-db3b-4e4b-a5f8-1b4ab945c278" containerName="mariadb-account-create-update" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.774062 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-k74gb" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.778317 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-558n5" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.778645 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.778826 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.793709 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k74gb"] Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.879749 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-scripts\") pod \"nova-cell0-conductor-db-sync-k74gb\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " pod="openstack/nova-cell0-conductor-db-sync-k74gb" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.879992 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pbkh\" (UniqueName: \"kubernetes.io/projected/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-kube-api-access-2pbkh\") pod \"nova-cell0-conductor-db-sync-k74gb\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " pod="openstack/nova-cell0-conductor-db-sync-k74gb" Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.880187 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-k74gb\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " pod="openstack/nova-cell0-conductor-db-sync-k74gb"
Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.880238 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-config-data\") pod \"nova-cell0-conductor-db-sync-k74gb\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " pod="openstack/nova-cell0-conductor-db-sync-k74gb"
Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.982364 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pbkh\" (UniqueName: \"kubernetes.io/projected/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-kube-api-access-2pbkh\") pod \"nova-cell0-conductor-db-sync-k74gb\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " pod="openstack/nova-cell0-conductor-db-sync-k74gb"
Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.982502 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-k74gb\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " pod="openstack/nova-cell0-conductor-db-sync-k74gb"
Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.982546 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-config-data\") pod \"nova-cell0-conductor-db-sync-k74gb\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " pod="openstack/nova-cell0-conductor-db-sync-k74gb"
Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.982654 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-scripts\") pod \"nova-cell0-conductor-db-sync-k74gb\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " pod="openstack/nova-cell0-conductor-db-sync-k74gb"
Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.991191 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-scripts\") pod \"nova-cell0-conductor-db-sync-k74gb\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " pod="openstack/nova-cell0-conductor-db-sync-k74gb"
Jan 29 16:31:20 crc kubenswrapper[4895]: I0129 16:31:20.991324 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-k74gb\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " pod="openstack/nova-cell0-conductor-db-sync-k74gb"
Jan 29 16:31:21 crc kubenswrapper[4895]: I0129 16:31:21.003760 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pbkh\" (UniqueName: \"kubernetes.io/projected/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-kube-api-access-2pbkh\") pod \"nova-cell0-conductor-db-sync-k74gb\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " pod="openstack/nova-cell0-conductor-db-sync-k74gb"
Jan 29 16:31:21 crc kubenswrapper[4895]: I0129 16:31:21.005531 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-config-data\") pod \"nova-cell0-conductor-db-sync-k74gb\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " pod="openstack/nova-cell0-conductor-db-sync-k74gb"
Jan 29 16:31:21 crc kubenswrapper[4895]: I0129 16:31:21.102929 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-k74gb"
Jan 29 16:31:24 crc kubenswrapper[4895]: I0129 16:31:24.941031 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k74gb"]
Jan 29 16:31:25 crc kubenswrapper[4895]: I0129 16:31:25.978032 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:31:25 crc kubenswrapper[4895]: I0129 16:31:25.981352 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="ceilometer-central-agent" containerID="cri-o://b9fa803e3d4c42cf9e03a8f27fd12bf27796d201742881dfb7f6ed8aeb6998e5" gracePeriod=30
Jan 29 16:31:25 crc kubenswrapper[4895]: I0129 16:31:25.982450 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="proxy-httpd" containerID="cri-o://fa24060cf4dbf13e67c4c3a11768e21e58d6fef85c653e0b3348943c9b082f6f" gracePeriod=30
Jan 29 16:31:25 crc kubenswrapper[4895]: I0129 16:31:25.982585 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="ceilometer-notification-agent" containerID="cri-o://1f4a20ba9633695a4901428142322837d95b0c1f44f62ff5a276fbf21c2accf4" gracePeriod=30
Jan 29 16:31:25 crc kubenswrapper[4895]: I0129 16:31:25.982639 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="sg-core" containerID="cri-o://20be649c4a0ab5c5af769994c74bc4f918b8f4f6dc2bc2bf975a8d473dbd06ad" gracePeriod=30
Jan 29 16:31:25 crc kubenswrapper[4895]: W0129 16:31:25.989881 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod066caef9_34c4_40a1_b7d4_cdfb48c02fe4.slice/crio-1bbf1eb5bb4470056aa671a313768f062d59e6c74b8eaccd560bb15325222c7d WatchSource:0}: Error finding container 1bbf1eb5bb4470056aa671a313768f062d59e6c74b8eaccd560bb15325222c7d: Status 404 returned error can't find the container with id 1bbf1eb5bb4470056aa671a313768f062d59e6c74b8eaccd560bb15325222c7d
Jan 29 16:31:26 crc kubenswrapper[4895]: I0129 16:31:26.001526 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 29 16:31:26 crc kubenswrapper[4895]: I0129 16:31:26.987137 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-k74gb" event={"ID":"066caef9-34c4-40a1-b7d4-cdfb48c02fe4","Type":"ContainerStarted","Data":"1bbf1eb5bb4470056aa671a313768f062d59e6c74b8eaccd560bb15325222c7d"}
Jan 29 16:31:26 crc kubenswrapper[4895]: I0129 16:31:26.991985 4895 generic.go:334] "Generic (PLEG): container finished" podID="08ccd04e-f148-46a4-88aa-b488fa132756" containerID="fa24060cf4dbf13e67c4c3a11768e21e58d6fef85c653e0b3348943c9b082f6f" exitCode=0
Jan 29 16:31:26 crc kubenswrapper[4895]: I0129 16:31:26.992024 4895 generic.go:334] "Generic (PLEG): container finished" podID="08ccd04e-f148-46a4-88aa-b488fa132756" containerID="20be649c4a0ab5c5af769994c74bc4f918b8f4f6dc2bc2bf975a8d473dbd06ad" exitCode=2
Jan 29 16:31:26 crc kubenswrapper[4895]: I0129 16:31:26.992032 4895 generic.go:334] "Generic (PLEG): container finished" podID="08ccd04e-f148-46a4-88aa-b488fa132756" containerID="b9fa803e3d4c42cf9e03a8f27fd12bf27796d201742881dfb7f6ed8aeb6998e5" exitCode=0
Jan 29 16:31:26 crc kubenswrapper[4895]: I0129 16:31:26.992075 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08ccd04e-f148-46a4-88aa-b488fa132756","Type":"ContainerDied","Data":"fa24060cf4dbf13e67c4c3a11768e21e58d6fef85c653e0b3348943c9b082f6f"}
Jan 29 16:31:26 crc kubenswrapper[4895]: I0129 16:31:26.992129 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08ccd04e-f148-46a4-88aa-b488fa132756","Type":"ContainerDied","Data":"20be649c4a0ab5c5af769994c74bc4f918b8f4f6dc2bc2bf975a8d473dbd06ad"}
Jan 29 16:31:26 crc kubenswrapper[4895]: I0129 16:31:26.992142 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08ccd04e-f148-46a4-88aa-b488fa132756","Type":"ContainerDied","Data":"b9fa803e3d4c42cf9e03a8f27fd12bf27796d201742881dfb7f6ed8aeb6998e5"}
Jan 29 16:31:26 crc kubenswrapper[4895]: I0129 16:31:26.995287 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d50b4deb-a738-4cac-9481-b4085086c116","Type":"ContainerStarted","Data":"89a7b070ffbc9f544badc95a0a22665ddb84cffa8830dcab307f7a039495a6ff"}
Jan 29 16:31:27 crc kubenswrapper[4895]: I0129 16:31:27.017116 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.536543396 podStartE2EDuration="23.01708831s" podCreationTimestamp="2026-01-29 16:31:04 +0000 UTC" firstStartedPulling="2026-01-29 16:31:07.603060671 +0000 UTC m=+1151.406037935" lastFinishedPulling="2026-01-29 16:31:26.083605595 +0000 UTC m=+1169.886582849" observedRunningTime="2026-01-29 16:31:27.013340859 +0000 UTC m=+1170.816318123" watchObservedRunningTime="2026-01-29 16:31:27.01708831 +0000 UTC m=+1170.820065594"
Jan 29 16:31:32 crc kubenswrapper[4895]: I0129 16:31:32.058858 4895 generic.go:334] "Generic (PLEG): container finished" podID="08ccd04e-f148-46a4-88aa-b488fa132756" containerID="1f4a20ba9633695a4901428142322837d95b0c1f44f62ff5a276fbf21c2accf4" exitCode=0
Jan 29 16:31:32 crc kubenswrapper[4895]: I0129 16:31:32.059010 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08ccd04e-f148-46a4-88aa-b488fa132756","Type":"ContainerDied","Data":"1f4a20ba9633695a4901428142322837d95b0c1f44f62ff5a276fbf21c2accf4"}
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.119119 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.146:3000/\": dial tcp 10.217.0.146:3000: connect: connection refused"
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.862256 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.978680 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-combined-ca-bundle\") pod \"08ccd04e-f148-46a4-88aa-b488fa132756\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") "
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.978732 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08ccd04e-f148-46a4-88aa-b488fa132756-run-httpd\") pod \"08ccd04e-f148-46a4-88aa-b488fa132756\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") "
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.978794 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6djj\" (UniqueName: \"kubernetes.io/projected/08ccd04e-f148-46a4-88aa-b488fa132756-kube-api-access-m6djj\") pod \"08ccd04e-f148-46a4-88aa-b488fa132756\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") "
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.978823 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-config-data\") pod \"08ccd04e-f148-46a4-88aa-b488fa132756\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") "
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.978907 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-scripts\") pod \"08ccd04e-f148-46a4-88aa-b488fa132756\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") "
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.979016 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-sg-core-conf-yaml\") pod \"08ccd04e-f148-46a4-88aa-b488fa132756\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") "
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.979137 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08ccd04e-f148-46a4-88aa-b488fa132756-log-httpd\") pod \"08ccd04e-f148-46a4-88aa-b488fa132756\" (UID: \"08ccd04e-f148-46a4-88aa-b488fa132756\") "
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.980678 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08ccd04e-f148-46a4-88aa-b488fa132756-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "08ccd04e-f148-46a4-88aa-b488fa132756" (UID: "08ccd04e-f148-46a4-88aa-b488fa132756"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.982353 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08ccd04e-f148-46a4-88aa-b488fa132756-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "08ccd04e-f148-46a4-88aa-b488fa132756" (UID: "08ccd04e-f148-46a4-88aa-b488fa132756"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.985163 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08ccd04e-f148-46a4-88aa-b488fa132756-kube-api-access-m6djj" (OuterVolumeSpecName: "kube-api-access-m6djj") pod "08ccd04e-f148-46a4-88aa-b488fa132756" (UID: "08ccd04e-f148-46a4-88aa-b488fa132756"). InnerVolumeSpecName "kube-api-access-m6djj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:31:33 crc kubenswrapper[4895]: I0129 16:31:33.996123 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-scripts" (OuterVolumeSpecName: "scripts") pod "08ccd04e-f148-46a4-88aa-b488fa132756" (UID: "08ccd04e-f148-46a4-88aa-b488fa132756"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.030054 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "08ccd04e-f148-46a4-88aa-b488fa132756" (UID: "08ccd04e-f148-46a4-88aa-b488fa132756"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.036318 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f656cd776-2tcds"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.077807 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f656cd776-2tcds"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.081511 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.081536 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.081546 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08ccd04e-f148-46a4-88aa-b488fa132756-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.081556 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08ccd04e-f148-46a4-88aa-b488fa132756-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.081564 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6djj\" (UniqueName: \"kubernetes.io/projected/08ccd04e-f148-46a4-88aa-b488fa132756-kube-api-access-m6djj\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.085665 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08ccd04e-f148-46a4-88aa-b488fa132756","Type":"ContainerDied","Data":"8287769d17f192ca4c2ffa30bfababfd0c9e4a4cb7c850ce8b611d9bcc3ad4c8"}
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.085719 4895 scope.go:117] "RemoveContainer" containerID="fa24060cf4dbf13e67c4c3a11768e21e58d6fef85c653e0b3348943c9b082f6f"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.085909 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.129734 4895 scope.go:117] "RemoveContainer" containerID="20be649c4a0ab5c5af769994c74bc4f918b8f4f6dc2bc2bf975a8d473dbd06ad"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.146070 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08ccd04e-f148-46a4-88aa-b488fa132756" (UID: "08ccd04e-f148-46a4-88aa-b488fa132756"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.174241 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-config-data" (OuterVolumeSpecName: "config-data") pod "08ccd04e-f148-46a4-88aa-b488fa132756" (UID: "08ccd04e-f148-46a4-88aa-b488fa132756"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.177338 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-84bb8677c6-sfxjh"]
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.177686 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-84bb8677c6-sfxjh" podUID="c992c27a-1165-4c22-99e3-67bee151dfb4" containerName="placement-log" containerID="cri-o://5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b" gracePeriod=30
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.178300 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-84bb8677c6-sfxjh" podUID="c992c27a-1165-4c22-99e3-67bee151dfb4" containerName="placement-api" containerID="cri-o://73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0" gracePeriod=30
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.185130 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.185167 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ccd04e-f148-46a4-88aa-b488fa132756-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.257670 4895 scope.go:117] "RemoveContainer" containerID="1f4a20ba9633695a4901428142322837d95b0c1f44f62ff5a276fbf21c2accf4"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.329655 4895 scope.go:117] "RemoveContainer" containerID="b9fa803e3d4c42cf9e03a8f27fd12bf27796d201742881dfb7f6ed8aeb6998e5"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.429541 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.439775 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.465083 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:31:34 crc kubenswrapper[4895]: E0129 16:31:34.465508 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="proxy-httpd"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.465526 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="proxy-httpd"
Jan 29 16:31:34 crc kubenswrapper[4895]: E0129 16:31:34.465564 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="sg-core"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.465571 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="sg-core"
Jan 29 16:31:34 crc kubenswrapper[4895]: E0129 16:31:34.465585 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="ceilometer-notification-agent"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.465591 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="ceilometer-notification-agent"
Jan 29 16:31:34 crc kubenswrapper[4895]: E0129 16:31:34.465600 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="ceilometer-central-agent"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.465606 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="ceilometer-central-agent"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.465756 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="proxy-httpd"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.465779 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="ceilometer-notification-agent"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.465797 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="ceilometer-central-agent"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.465806 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" containerName="sg-core"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.468487 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.470883 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.472180 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.495428 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.592717 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.592803 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c483c1b3-368a-4668-b867-16892c1b7fd2-run-httpd\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.592858 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4hqc\" (UniqueName: \"kubernetes.io/projected/c483c1b3-368a-4668-b867-16892c1b7fd2-kube-api-access-m4hqc\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.592948 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-scripts\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.592973 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c483c1b3-368a-4668-b867-16892c1b7fd2-log-httpd\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.593006 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-config-data\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.593046 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.694914 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.694981 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.695008 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c483c1b3-368a-4668-b867-16892c1b7fd2-run-httpd\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.695044 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4hqc\" (UniqueName: \"kubernetes.io/projected/c483c1b3-368a-4668-b867-16892c1b7fd2-kube-api-access-m4hqc\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.695106 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-scripts\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.695123 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c483c1b3-368a-4668-b867-16892c1b7fd2-log-httpd\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.695149 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-config-data\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.696598 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c483c1b3-368a-4668-b867-16892c1b7fd2-run-httpd\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.696859 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c483c1b3-368a-4668-b867-16892c1b7fd2-log-httpd\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.702277 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.703039 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-scripts\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.703376 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.703947 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-config-data\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.712710 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4hqc\" (UniqueName: \"kubernetes.io/projected/c483c1b3-368a-4668-b867-16892c1b7fd2-kube-api-access-m4hqc\") pod \"ceilometer-0\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " pod="openstack/ceilometer-0"
Jan 29 16:31:34 crc kubenswrapper[4895]: I0129 16:31:34.808831 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:31:35 crc kubenswrapper[4895]: I0129 16:31:35.061682 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08ccd04e-f148-46a4-88aa-b488fa132756" path="/var/lib/kubelet/pods/08ccd04e-f148-46a4-88aa-b488fa132756/volumes"
Jan 29 16:31:35 crc kubenswrapper[4895]: I0129 16:31:35.099439 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-k74gb" event={"ID":"066caef9-34c4-40a1-b7d4-cdfb48c02fe4","Type":"ContainerStarted","Data":"589cebf9efc55704e524d2404473ab8e5ebae744d498ea8eeba43dd05a1b3c47"}
Jan 29 16:31:35 crc kubenswrapper[4895]: I0129 16:31:35.129292 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:31:35 crc kubenswrapper[4895]: I0129 16:31:35.129694 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-k74gb" podStartSLOduration=7.289499439 podStartE2EDuration="15.129683894s" podCreationTimestamp="2026-01-29 16:31:20 +0000 UTC" firstStartedPulling="2026-01-29 16:31:26.013725652 +0000 UTC m=+1169.816702916" lastFinishedPulling="2026-01-29 16:31:33.853910097 +0000 UTC m=+1177.656887371" observedRunningTime="2026-01-29 16:31:35.125338907 +0000 UTC m=+1178.928316191" watchObservedRunningTime="2026-01-29 16:31:35.129683894 +0000 UTC m=+1178.932661148"
Jan 29 16:31:35 crc kubenswrapper[4895]: I0129 16:31:35.131615 4895 generic.go:334] "Generic (PLEG): container finished" podID="c992c27a-1165-4c22-99e3-67bee151dfb4" containerID="5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b" exitCode=143
Jan 29 16:31:35 crc kubenswrapper[4895]: I0129 16:31:35.131694 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84bb8677c6-sfxjh" event={"ID":"c992c27a-1165-4c22-99e3-67bee151dfb4","Type":"ContainerDied","Data":"5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b"}
Jan 29 16:31:36 crc kubenswrapper[4895]: I0129 16:31:36.148900 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c483c1b3-368a-4668-b867-16892c1b7fd2","Type":"ContainerStarted","Data":"c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d"}
Jan 29 16:31:36 crc kubenswrapper[4895]: I0129 16:31:36.149423 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c483c1b3-368a-4668-b867-16892c1b7fd2","Type":"ContainerStarted","Data":"d1ddb098975529dda56d0360b518a769814ac355fc636ce05db2c27a63e0ca67"}
Jan 29 16:31:37 crc kubenswrapper[4895]: I0129 16:31:37.160855 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c483c1b3-368a-4668-b867-16892c1b7fd2","Type":"ContainerStarted","Data":"069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be"}
Jan 29 16:31:37 crc kubenswrapper[4895]: E0129 16:31:37.347035 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest"
Jan 29 16:31:37 crc kubenswrapper[4895]: E0129 16:31:37.347306 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4hqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(c483c1b3-368a-4668-b867-16892c1b7fd2): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:31:37 crc kubenswrapper[4895]: E0129 16:31:37.348511 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2"
Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.034094 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-84bb8677c6-sfxjh"
Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.115023 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.170681 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-internal-tls-certs\") pod \"c992c27a-1165-4c22-99e3-67bee151dfb4\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") "
Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.170764 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-public-tls-certs\") pod \"c992c27a-1165-4c22-99e3-67bee151dfb4\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") "
Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.170988 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-combined-ca-bundle\") pod \"c992c27a-1165-4c22-99e3-67bee151dfb4\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") "
Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.171562 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c992c27a-1165-4c22-99e3-67bee151dfb4-logs\") pod \"c992c27a-1165-4c22-99e3-67bee151dfb4\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") "
Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.171609 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-scripts\") pod \"c992c27a-1165-4c22-99e3-67bee151dfb4\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") "
Jan 29 16:31:38 crc
kubenswrapper[4895]: I0129 16:31:38.171690 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdj4f\" (UniqueName: \"kubernetes.io/projected/c992c27a-1165-4c22-99e3-67bee151dfb4-kube-api-access-mdj4f\") pod \"c992c27a-1165-4c22-99e3-67bee151dfb4\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.171768 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-config-data\") pod \"c992c27a-1165-4c22-99e3-67bee151dfb4\" (UID: \"c992c27a-1165-4c22-99e3-67bee151dfb4\") " Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.172054 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c992c27a-1165-4c22-99e3-67bee151dfb4-logs" (OuterVolumeSpecName: "logs") pod "c992c27a-1165-4c22-99e3-67bee151dfb4" (UID: "c992c27a-1165-4c22-99e3-67bee151dfb4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.172674 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c992c27a-1165-4c22-99e3-67bee151dfb4-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.177797 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-scripts" (OuterVolumeSpecName: "scripts") pod "c992c27a-1165-4c22-99e3-67bee151dfb4" (UID: "c992c27a-1165-4c22-99e3-67bee151dfb4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.178151 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c992c27a-1165-4c22-99e3-67bee151dfb4-kube-api-access-mdj4f" (OuterVolumeSpecName: "kube-api-access-mdj4f") pod "c992c27a-1165-4c22-99e3-67bee151dfb4" (UID: "c992c27a-1165-4c22-99e3-67bee151dfb4"). InnerVolumeSpecName "kube-api-access-mdj4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.179451 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c483c1b3-368a-4668-b867-16892c1b7fd2","Type":"ContainerStarted","Data":"601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322"} Jan 29 16:31:38 crc kubenswrapper[4895]: E0129 16:31:38.184811 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.187539 4895 generic.go:334] "Generic (PLEG): container finished" podID="c992c27a-1165-4c22-99e3-67bee151dfb4" containerID="73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0" exitCode=0 Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.187663 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84bb8677c6-sfxjh" event={"ID":"c992c27a-1165-4c22-99e3-67bee151dfb4","Type":"ContainerDied","Data":"73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0"} Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.187755 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84bb8677c6-sfxjh" 
event={"ID":"c992c27a-1165-4c22-99e3-67bee151dfb4","Type":"ContainerDied","Data":"86e76405b100d8204ab01146ffc67dfdea00cc682573acaea96a3a84080177f1"} Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.187858 4895 scope.go:117] "RemoveContainer" containerID="73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.188082 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-84bb8677c6-sfxjh" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.274946 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.274982 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdj4f\" (UniqueName: \"kubernetes.io/projected/c992c27a-1165-4c22-99e3-67bee151dfb4-kube-api-access-mdj4f\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.288581 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c992c27a-1165-4c22-99e3-67bee151dfb4" (UID: "c992c27a-1165-4c22-99e3-67bee151dfb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.296978 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-config-data" (OuterVolumeSpecName: "config-data") pod "c992c27a-1165-4c22-99e3-67bee151dfb4" (UID: "c992c27a-1165-4c22-99e3-67bee151dfb4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.328234 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c992c27a-1165-4c22-99e3-67bee151dfb4" (UID: "c992c27a-1165-4c22-99e3-67bee151dfb4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.346471 4895 scope.go:117] "RemoveContainer" containerID="5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.363187 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c992c27a-1165-4c22-99e3-67bee151dfb4" (UID: "c992c27a-1165-4c22-99e3-67bee151dfb4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.376699 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.376773 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.376787 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.376798 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c992c27a-1165-4c22-99e3-67bee151dfb4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.383067 4895 scope.go:117] "RemoveContainer" containerID="73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0" Jan 29 16:31:38 crc kubenswrapper[4895]: E0129 16:31:38.383570 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0\": container with ID starting with 73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0 not found: ID does not exist" containerID="73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.383697 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0"} err="failed to get container status \"73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0\": rpc error: code = NotFound desc = could not find container \"73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0\": container with ID starting with 73efc05bd96b69c137d019897964d5b4c36b5247cb95420f086fa322d96e36f0 not found: ID does not exist" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.383785 4895 scope.go:117] "RemoveContainer" containerID="5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b" Jan 29 16:31:38 crc kubenswrapper[4895]: E0129 16:31:38.384170 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b\": container with ID starting with 5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b not found: ID does not exist" containerID="5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.384249 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b"} err="failed to get container status \"5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b\": rpc error: code = NotFound desc = could not find container \"5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b\": container with ID starting with 5a753e7cf1244a1cb5e24d027204daf11cc07183880da99c35be5fddf105349b not found: ID does not exist" Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.522664 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-84bb8677c6-sfxjh"] Jan 29 16:31:38 crc kubenswrapper[4895]: I0129 16:31:38.533278 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-84bb8677c6-sfxjh"] Jan 29 16:31:39 crc kubenswrapper[4895]: I0129 16:31:39.048491 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c992c27a-1165-4c22-99e3-67bee151dfb4" path="/var/lib/kubelet/pods/c992c27a-1165-4c22-99e3-67bee151dfb4/volumes" Jan 29 16:31:39 crc kubenswrapper[4895]: I0129 16:31:39.207435 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerName="ceilometer-central-agent" containerID="cri-o://c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d" gracePeriod=30 Jan 29 16:31:39 crc kubenswrapper[4895]: I0129 16:31:39.209271 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerName="ceilometer-notification-agent" containerID="cri-o://069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be" gracePeriod=30 Jan 29 16:31:39 crc kubenswrapper[4895]: I0129 16:31:39.209748 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerName="sg-core" containerID="cri-o://601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322" gracePeriod=30 Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.200451 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.221129 4895 generic.go:334] "Generic (PLEG): container finished" podID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerID="601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322" exitCode=2 Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.221184 4895 generic.go:334] "Generic (PLEG): container finished" podID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerID="069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be" exitCode=0 Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.221193 4895 generic.go:334] "Generic (PLEG): container finished" podID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerID="c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d" exitCode=0 Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.221227 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.221232 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c483c1b3-368a-4668-b867-16892c1b7fd2","Type":"ContainerDied","Data":"601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322"} Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.221339 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c483c1b3-368a-4668-b867-16892c1b7fd2","Type":"ContainerDied","Data":"069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be"} Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.221353 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c483c1b3-368a-4668-b867-16892c1b7fd2","Type":"ContainerDied","Data":"c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d"} Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.221364 4895 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"c483c1b3-368a-4668-b867-16892c1b7fd2","Type":"ContainerDied","Data":"d1ddb098975529dda56d0360b518a769814ac355fc636ce05db2c27a63e0ca67"} Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.221382 4895 scope.go:117] "RemoveContainer" containerID="601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.250481 4895 scope.go:117] "RemoveContainer" containerID="069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.272899 4895 scope.go:117] "RemoveContainer" containerID="c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.300180 4895 scope.go:117] "RemoveContainer" containerID="601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322" Jan 29 16:31:40 crc kubenswrapper[4895]: E0129 16:31:40.308213 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322\": container with ID starting with 601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322 not found: ID does not exist" containerID="601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.308270 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322"} err="failed to get container status \"601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322\": rpc error: code = NotFound desc = could not find container \"601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322\": container with ID starting with 601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322 not found: ID does not exist" Jan 29 16:31:40 crc kubenswrapper[4895]: 
I0129 16:31:40.308307 4895 scope.go:117] "RemoveContainer" containerID="069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be" Jan 29 16:31:40 crc kubenswrapper[4895]: E0129 16:31:40.309104 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be\": container with ID starting with 069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be not found: ID does not exist" containerID="069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.309142 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be"} err="failed to get container status \"069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be\": rpc error: code = NotFound desc = could not find container \"069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be\": container with ID starting with 069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be not found: ID does not exist" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.309165 4895 scope.go:117] "RemoveContainer" containerID="c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d" Jan 29 16:31:40 crc kubenswrapper[4895]: E0129 16:31:40.309980 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d\": container with ID starting with c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d not found: ID does not exist" containerID="c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.310031 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d"} err="failed to get container status \"c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d\": rpc error: code = NotFound desc = could not find container \"c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d\": container with ID starting with c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d not found: ID does not exist" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.310065 4895 scope.go:117] "RemoveContainer" containerID="601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.310582 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322"} err="failed to get container status \"601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322\": rpc error: code = NotFound desc = could not find container \"601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322\": container with ID starting with 601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322 not found: ID does not exist" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.310613 4895 scope.go:117] "RemoveContainer" containerID="069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.310791 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be"} err="failed to get container status \"069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be\": rpc error: code = NotFound desc = could not find container \"069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be\": container with ID starting with 069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be not found: ID does not 
exist" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.310810 4895 scope.go:117] "RemoveContainer" containerID="c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.311199 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d"} err="failed to get container status \"c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d\": rpc error: code = NotFound desc = could not find container \"c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d\": container with ID starting with c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d not found: ID does not exist" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.311217 4895 scope.go:117] "RemoveContainer" containerID="601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.311392 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322"} err="failed to get container status \"601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322\": rpc error: code = NotFound desc = could not find container \"601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322\": container with ID starting with 601788cffb66a515d69192a542dd28bc51a873eb89f7b33797c2b73843a02322 not found: ID does not exist" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.311409 4895 scope.go:117] "RemoveContainer" containerID="069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.311577 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be"} err="failed to get container status 
\"069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be\": rpc error: code = NotFound desc = could not find container \"069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be\": container with ID starting with 069f1eb5b334b63d40ae89921abc45e9eb872e145bac7d483ca96083f04e87be not found: ID does not exist" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.311602 4895 scope.go:117] "RemoveContainer" containerID="c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.311820 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d"} err="failed to get container status \"c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d\": rpc error: code = NotFound desc = could not find container \"c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d\": container with ID starting with c45a07d4685e64e8364778d984fa64bb4a340151aa1caa4356cf1d6389a5747d not found: ID does not exist" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.317057 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-config-data\") pod \"c483c1b3-368a-4668-b867-16892c1b7fd2\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.317127 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c483c1b3-368a-4668-b867-16892c1b7fd2-log-httpd\") pod \"c483c1b3-368a-4668-b867-16892c1b7fd2\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.317175 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c483c1b3-368a-4668-b867-16892c1b7fd2-run-httpd\") pod \"c483c1b3-368a-4668-b867-16892c1b7fd2\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.317229 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4hqc\" (UniqueName: \"kubernetes.io/projected/c483c1b3-368a-4668-b867-16892c1b7fd2-kube-api-access-m4hqc\") pod \"c483c1b3-368a-4668-b867-16892c1b7fd2\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.317340 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-scripts\") pod \"c483c1b3-368a-4668-b867-16892c1b7fd2\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.317630 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c483c1b3-368a-4668-b867-16892c1b7fd2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c483c1b3-368a-4668-b867-16892c1b7fd2" (UID: "c483c1b3-368a-4668-b867-16892c1b7fd2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.317674 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-combined-ca-bundle\") pod \"c483c1b3-368a-4668-b867-16892c1b7fd2\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.317997 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c483c1b3-368a-4668-b867-16892c1b7fd2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c483c1b3-368a-4668-b867-16892c1b7fd2" (UID: "c483c1b3-368a-4668-b867-16892c1b7fd2"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.318601 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-sg-core-conf-yaml\") pod \"c483c1b3-368a-4668-b867-16892c1b7fd2\" (UID: \"c483c1b3-368a-4668-b867-16892c1b7fd2\") " Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.319594 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c483c1b3-368a-4668-b867-16892c1b7fd2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.319654 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c483c1b3-368a-4668-b867-16892c1b7fd2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.325115 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-scripts" (OuterVolumeSpecName: "scripts") pod "c483c1b3-368a-4668-b867-16892c1b7fd2" (UID: "c483c1b3-368a-4668-b867-16892c1b7fd2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.325320 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c483c1b3-368a-4668-b867-16892c1b7fd2-kube-api-access-m4hqc" (OuterVolumeSpecName: "kube-api-access-m4hqc") pod "c483c1b3-368a-4668-b867-16892c1b7fd2" (UID: "c483c1b3-368a-4668-b867-16892c1b7fd2"). InnerVolumeSpecName "kube-api-access-m4hqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.347815 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c483c1b3-368a-4668-b867-16892c1b7fd2" (UID: "c483c1b3-368a-4668-b867-16892c1b7fd2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.370330 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c483c1b3-368a-4668-b867-16892c1b7fd2" (UID: "c483c1b3-368a-4668-b867-16892c1b7fd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.372732 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-config-data" (OuterVolumeSpecName: "config-data") pod "c483c1b3-368a-4668-b867-16892c1b7fd2" (UID: "c483c1b3-368a-4668-b867-16892c1b7fd2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.420952 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4hqc\" (UniqueName: \"kubernetes.io/projected/c483c1b3-368a-4668-b867-16892c1b7fd2-kube-api-access-m4hqc\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.420993 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.421004 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.421014 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.421022 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c483c1b3-368a-4668-b867-16892c1b7fd2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.617481 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.638166 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.646718 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:31:40 crc kubenswrapper[4895]: E0129 16:31:40.647249 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c992c27a-1165-4c22-99e3-67bee151dfb4" containerName="placement-log" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.647274 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c992c27a-1165-4c22-99e3-67bee151dfb4" containerName="placement-log" Jan 29 16:31:40 crc kubenswrapper[4895]: E0129 16:31:40.647295 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerName="ceilometer-central-agent" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.647304 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerName="ceilometer-central-agent" Jan 29 16:31:40 crc kubenswrapper[4895]: E0129 16:31:40.647321 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c992c27a-1165-4c22-99e3-67bee151dfb4" containerName="placement-api" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.647327 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c992c27a-1165-4c22-99e3-67bee151dfb4" containerName="placement-api" Jan 29 16:31:40 crc kubenswrapper[4895]: E0129 16:31:40.647338 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerName="ceilometer-notification-agent" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.647345 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerName="ceilometer-notification-agent" Jan 29 16:31:40 crc kubenswrapper[4895]: E0129 16:31:40.647364 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerName="sg-core" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.647370 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerName="sg-core" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.647641 4895 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="c992c27a-1165-4c22-99e3-67bee151dfb4" containerName="placement-api" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.647669 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerName="ceilometer-notification-agent" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.647687 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerName="ceilometer-central-agent" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.647698 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c992c27a-1165-4c22-99e3-67bee151dfb4" containerName="placement-log" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.647713 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" containerName="sg-core" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.649589 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.655518 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.656360 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.656618 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.726032 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-log-httpd\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.726097 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-496vw\" (UniqueName: \"kubernetes.io/projected/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-kube-api-access-496vw\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.726125 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.726294 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-run-httpd\") pod \"ceilometer-0\" (UID: 
\"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.726450 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-config-data\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.726494 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.727119 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-scripts\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.829547 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-log-httpd\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.829599 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-496vw\" (UniqueName: \"kubernetes.io/projected/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-kube-api-access-496vw\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.829634 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.829653 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-run-httpd\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.829671 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-config-data\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.830043 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.830085 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-scripts\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.830262 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-run-httpd\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " 
pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.830287 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-log-httpd\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.834754 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-scripts\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.835103 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.835397 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.837238 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-config-data\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.847991 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-496vw\" (UniqueName: 
\"kubernetes.io/projected/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-kube-api-access-496vw\") pod \"ceilometer-0\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " pod="openstack/ceilometer-0" Jan 29 16:31:40 crc kubenswrapper[4895]: I0129 16:31:40.985720 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:31:41 crc kubenswrapper[4895]: I0129 16:31:41.052355 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c483c1b3-368a-4668-b867-16892c1b7fd2" path="/var/lib/kubelet/pods/c483c1b3-368a-4668-b867-16892c1b7fd2/volumes" Jan 29 16:31:41 crc kubenswrapper[4895]: W0129 16:31:41.451300 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06d3cbc6_fffd_45bd_b5cf_4cbd6d936428.slice/crio-a749f3e33ad71183806fb115cbbca7a92f775d0470f6a3b559b1fcfb6081cbbe WatchSource:0}: Error finding container a749f3e33ad71183806fb115cbbca7a92f775d0470f6a3b559b1fcfb6081cbbe: Status 404 returned error can't find the container with id a749f3e33ad71183806fb115cbbca7a92f775d0470f6a3b559b1fcfb6081cbbe Jan 29 16:31:41 crc kubenswrapper[4895]: I0129 16:31:41.456030 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:31:42 crc kubenswrapper[4895]: I0129 16:31:42.247411 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428","Type":"ContainerStarted","Data":"db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283"} Jan 29 16:31:42 crc kubenswrapper[4895]: I0129 16:31:42.247951 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428","Type":"ContainerStarted","Data":"a749f3e33ad71183806fb115cbbca7a92f775d0470f6a3b559b1fcfb6081cbbe"} Jan 29 16:31:43 crc kubenswrapper[4895]: I0129 16:31:43.261667 4895 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428","Type":"ContainerStarted","Data":"5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a"} Jan 29 16:31:43 crc kubenswrapper[4895]: E0129 16:31:43.459357 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:31:43 crc kubenswrapper[4895]: E0129 16:31:43.459928 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-496vw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly
:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(06d3cbc6-fffd-45bd-b5cf-4cbd6d936428): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:31:43 crc kubenswrapper[4895]: E0129 16:31:43.461349 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" Jan 29 16:31:44 crc kubenswrapper[4895]: I0129 16:31:44.274519 4895 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428","Type":"ContainerStarted","Data":"8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02"} Jan 29 16:31:44 crc kubenswrapper[4895]: E0129 16:31:44.277772 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" Jan 29 16:31:45 crc kubenswrapper[4895]: E0129 16:31:45.286121 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" Jan 29 16:31:46 crc kubenswrapper[4895]: I0129 16:31:46.302686 4895 generic.go:334] "Generic (PLEG): container finished" podID="066caef9-34c4-40a1-b7d4-cdfb48c02fe4" containerID="589cebf9efc55704e524d2404473ab8e5ebae744d498ea8eeba43dd05a1b3c47" exitCode=0 Jan 29 16:31:46 crc kubenswrapper[4895]: I0129 16:31:46.302738 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-k74gb" event={"ID":"066caef9-34c4-40a1-b7d4-cdfb48c02fe4","Type":"ContainerDied","Data":"589cebf9efc55704e524d2404473ab8e5ebae744d498ea8eeba43dd05a1b3c47"} Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.753596 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-k74gb" Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.882378 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-combined-ca-bundle\") pod \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.882606 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-scripts\") pod \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.884298 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-config-data\") pod \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.884369 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pbkh\" (UniqueName: \"kubernetes.io/projected/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-kube-api-access-2pbkh\") pod \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\" (UID: \"066caef9-34c4-40a1-b7d4-cdfb48c02fe4\") " Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.896971 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-scripts" (OuterVolumeSpecName: "scripts") pod "066caef9-34c4-40a1-b7d4-cdfb48c02fe4" (UID: "066caef9-34c4-40a1-b7d4-cdfb48c02fe4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.897484 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-kube-api-access-2pbkh" (OuterVolumeSpecName: "kube-api-access-2pbkh") pod "066caef9-34c4-40a1-b7d4-cdfb48c02fe4" (UID: "066caef9-34c4-40a1-b7d4-cdfb48c02fe4"). InnerVolumeSpecName "kube-api-access-2pbkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.911432 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-config-data" (OuterVolumeSpecName: "config-data") pod "066caef9-34c4-40a1-b7d4-cdfb48c02fe4" (UID: "066caef9-34c4-40a1-b7d4-cdfb48c02fe4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.914264 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "066caef9-34c4-40a1-b7d4-cdfb48c02fe4" (UID: "066caef9-34c4-40a1-b7d4-cdfb48c02fe4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.987037 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.987074 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.987084 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:47 crc kubenswrapper[4895]: I0129 16:31:47.987095 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pbkh\" (UniqueName: \"kubernetes.io/projected/066caef9-34c4-40a1-b7d4-cdfb48c02fe4-kube-api-access-2pbkh\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.327461 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-k74gb" event={"ID":"066caef9-34c4-40a1-b7d4-cdfb48c02fe4","Type":"ContainerDied","Data":"1bbf1eb5bb4470056aa671a313768f062d59e6c74b8eaccd560bb15325222c7d"} Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.327798 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bbf1eb5bb4470056aa671a313768f062d59e6c74b8eaccd560bb15325222c7d" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.327517 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-k74gb" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.456517 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:31:48 crc kubenswrapper[4895]: E0129 16:31:48.457000 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066caef9-34c4-40a1-b7d4-cdfb48c02fe4" containerName="nova-cell0-conductor-db-sync" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.457022 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="066caef9-34c4-40a1-b7d4-cdfb48c02fe4" containerName="nova-cell0-conductor-db-sync" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.457227 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="066caef9-34c4-40a1-b7d4-cdfb48c02fe4" containerName="nova-cell0-conductor-db-sync" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.457862 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.468294 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-558n5" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.468526 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.472556 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.602754 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d911040-c2e2-4da1-9291-f13f91e6fb87-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2d911040-c2e2-4da1-9291-f13f91e6fb87\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:48 crc kubenswrapper[4895]: 
I0129 16:31:48.602842 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gp2c\" (UniqueName: \"kubernetes.io/projected/2d911040-c2e2-4da1-9291-f13f91e6fb87-kube-api-access-5gp2c\") pod \"nova-cell0-conductor-0\" (UID: \"2d911040-c2e2-4da1-9291-f13f91e6fb87\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.603043 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d911040-c2e2-4da1-9291-f13f91e6fb87-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2d911040-c2e2-4da1-9291-f13f91e6fb87\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.704566 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d911040-c2e2-4da1-9291-f13f91e6fb87-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2d911040-c2e2-4da1-9291-f13f91e6fb87\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.704654 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d911040-c2e2-4da1-9291-f13f91e6fb87-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2d911040-c2e2-4da1-9291-f13f91e6fb87\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.704694 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gp2c\" (UniqueName: \"kubernetes.io/projected/2d911040-c2e2-4da1-9291-f13f91e6fb87-kube-api-access-5gp2c\") pod \"nova-cell0-conductor-0\" (UID: \"2d911040-c2e2-4da1-9291-f13f91e6fb87\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.712204 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d911040-c2e2-4da1-9291-f13f91e6fb87-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2d911040-c2e2-4da1-9291-f13f91e6fb87\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.723669 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d911040-c2e2-4da1-9291-f13f91e6fb87-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2d911040-c2e2-4da1-9291-f13f91e6fb87\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.730646 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gp2c\" (UniqueName: \"kubernetes.io/projected/2d911040-c2e2-4da1-9291-f13f91e6fb87-kube-api-access-5gp2c\") pod \"nova-cell0-conductor-0\" (UID: \"2d911040-c2e2-4da1-9291-f13f91e6fb87\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:48 crc kubenswrapper[4895]: I0129 16:31:48.818019 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:49 crc kubenswrapper[4895]: I0129 16:31:49.292425 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:31:49 crc kubenswrapper[4895]: I0129 16:31:49.351262 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2d911040-c2e2-4da1-9291-f13f91e6fb87","Type":"ContainerStarted","Data":"cc737a2304a5496da62753d13cd1b3a4bed8cc096c962b6c15853b8793455be6"} Jan 29 16:31:50 crc kubenswrapper[4895]: I0129 16:31:50.365171 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2d911040-c2e2-4da1-9291-f13f91e6fb87","Type":"ContainerStarted","Data":"459fcad5ecf9ce397403ef7183604d522b848035382206c6922cc2c7fb019031"} Jan 29 16:31:50 crc kubenswrapper[4895]: I0129 16:31:50.365612 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:50 crc kubenswrapper[4895]: I0129 16:31:50.400427 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.400398385 podStartE2EDuration="2.400398385s" podCreationTimestamp="2026-01-29 16:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:50.386786386 +0000 UTC m=+1194.189763690" watchObservedRunningTime="2026-01-29 16:31:50.400398385 +0000 UTC m=+1194.203375659" Jan 29 16:31:56 crc kubenswrapper[4895]: I0129 16:31:56.221407 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:31:56 crc kubenswrapper[4895]: I0129 16:31:56.222529 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="2d911040-c2e2-4da1-9291-f13f91e6fb87" 
containerName="nova-cell0-conductor-conductor" containerID="cri-o://459fcad5ecf9ce397403ef7183604d522b848035382206c6922cc2c7fb019031" gracePeriod=30 Jan 29 16:31:56 crc kubenswrapper[4895]: E0129 16:31:56.226148 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="459fcad5ecf9ce397403ef7183604d522b848035382206c6922cc2c7fb019031" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:31:56 crc kubenswrapper[4895]: E0129 16:31:56.228256 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="459fcad5ecf9ce397403ef7183604d522b848035382206c6922cc2c7fb019031" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:31:56 crc kubenswrapper[4895]: E0129 16:31:56.232861 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="459fcad5ecf9ce397403ef7183604d522b848035382206c6922cc2c7fb019031" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:31:56 crc kubenswrapper[4895]: E0129 16:31:56.233003 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="2d911040-c2e2-4da1-9291-f13f91e6fb87" containerName="nova-cell0-conductor-conductor" Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.323530 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.324403 4895 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerName="ceilometer-central-agent" containerID="cri-o://db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283" gracePeriod=30 Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.325054 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerName="sg-core" containerID="cri-o://8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02" gracePeriod=30 Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.325129 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerName="ceilometer-notification-agent" containerID="cri-o://5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a" gracePeriod=30 Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.452530 4895 generic.go:334] "Generic (PLEG): container finished" podID="2d911040-c2e2-4da1-9291-f13f91e6fb87" containerID="459fcad5ecf9ce397403ef7183604d522b848035382206c6922cc2c7fb019031" exitCode=0 Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.452591 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2d911040-c2e2-4da1-9291-f13f91e6fb87","Type":"ContainerDied","Data":"459fcad5ecf9ce397403ef7183604d522b848035382206c6922cc2c7fb019031"} Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.452630 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2d911040-c2e2-4da1-9291-f13f91e6fb87","Type":"ContainerDied","Data":"cc737a2304a5496da62753d13cd1b3a4bed8cc096c962b6c15853b8793455be6"} Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.452645 4895 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="cc737a2304a5496da62753d13cd1b3a4bed8cc096c962b6c15853b8793455be6" Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.521165 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.613802 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d911040-c2e2-4da1-9291-f13f91e6fb87-config-data\") pod \"2d911040-c2e2-4da1-9291-f13f91e6fb87\" (UID: \"2d911040-c2e2-4da1-9291-f13f91e6fb87\") " Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.614179 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gp2c\" (UniqueName: \"kubernetes.io/projected/2d911040-c2e2-4da1-9291-f13f91e6fb87-kube-api-access-5gp2c\") pod \"2d911040-c2e2-4da1-9291-f13f91e6fb87\" (UID: \"2d911040-c2e2-4da1-9291-f13f91e6fb87\") " Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.614426 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d911040-c2e2-4da1-9291-f13f91e6fb87-combined-ca-bundle\") pod \"2d911040-c2e2-4da1-9291-f13f91e6fb87\" (UID: \"2d911040-c2e2-4da1-9291-f13f91e6fb87\") " Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.623668 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d911040-c2e2-4da1-9291-f13f91e6fb87-kube-api-access-5gp2c" (OuterVolumeSpecName: "kube-api-access-5gp2c") pod "2d911040-c2e2-4da1-9291-f13f91e6fb87" (UID: "2d911040-c2e2-4da1-9291-f13f91e6fb87"). InnerVolumeSpecName "kube-api-access-5gp2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.673092 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d911040-c2e2-4da1-9291-f13f91e6fb87-config-data" (OuterVolumeSpecName: "config-data") pod "2d911040-c2e2-4da1-9291-f13f91e6fb87" (UID: "2d911040-c2e2-4da1-9291-f13f91e6fb87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.676218 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d911040-c2e2-4da1-9291-f13f91e6fb87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d911040-c2e2-4da1-9291-f13f91e6fb87" (UID: "2d911040-c2e2-4da1-9291-f13f91e6fb87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.718315 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d911040-c2e2-4da1-9291-f13f91e6fb87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.718566 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d911040-c2e2-4da1-9291-f13f91e6fb87-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.718586 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gp2c\" (UniqueName: \"kubernetes.io/projected/2d911040-c2e2-4da1-9291-f13f91e6fb87-kube-api-access-5gp2c\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.824209 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:31:57 crc kubenswrapper[4895]: I0129 16:31:57.824281 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.469639 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428","Type":"ContainerDied","Data":"8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02"} Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.469609 4895 generic.go:334] "Generic (PLEG): container finished" podID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerID="8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02" exitCode=2 Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.470160 4895 generic.go:334] "Generic (PLEG): container finished" podID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerID="db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283" exitCode=0 Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.470223 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428","Type":"ContainerDied","Data":"db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283"} Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.470289 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.521291 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.536485 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.550837 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:31:58 crc kubenswrapper[4895]: E0129 16:31:58.551687 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d911040-c2e2-4da1-9291-f13f91e6fb87" containerName="nova-cell0-conductor-conductor" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.551722 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d911040-c2e2-4da1-9291-f13f91e6fb87" containerName="nova-cell0-conductor-conductor" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.552063 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d911040-c2e2-4da1-9291-f13f91e6fb87" containerName="nova-cell0-conductor-conductor" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.553172 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.555513 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.557314 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-558n5" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.560750 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.646842 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qnfk\" (UniqueName: \"kubernetes.io/projected/a4c21cd5-3241-44a4-a189-025ef2084f9d-kube-api-access-2qnfk\") pod \"nova-cell0-conductor-0\" (UID: \"a4c21cd5-3241-44a4-a189-025ef2084f9d\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.647328 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c21cd5-3241-44a4-a189-025ef2084f9d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a4c21cd5-3241-44a4-a189-025ef2084f9d\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.647729 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c21cd5-3241-44a4-a189-025ef2084f9d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a4c21cd5-3241-44a4-a189-025ef2084f9d\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.750285 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4c21cd5-3241-44a4-a189-025ef2084f9d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a4c21cd5-3241-44a4-a189-025ef2084f9d\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.750834 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qnfk\" (UniqueName: \"kubernetes.io/projected/a4c21cd5-3241-44a4-a189-025ef2084f9d-kube-api-access-2qnfk\") pod \"nova-cell0-conductor-0\" (UID: \"a4c21cd5-3241-44a4-a189-025ef2084f9d\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.751165 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c21cd5-3241-44a4-a189-025ef2084f9d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a4c21cd5-3241-44a4-a189-025ef2084f9d\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.759768 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c21cd5-3241-44a4-a189-025ef2084f9d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a4c21cd5-3241-44a4-a189-025ef2084f9d\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.768567 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c21cd5-3241-44a4-a189-025ef2084f9d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a4c21cd5-3241-44a4-a189-025ef2084f9d\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.774080 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qnfk\" (UniqueName: \"kubernetes.io/projected/a4c21cd5-3241-44a4-a189-025ef2084f9d-kube-api-access-2qnfk\") pod \"nova-cell0-conductor-0\" (UID: 
\"a4c21cd5-3241-44a4-a189-025ef2084f9d\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:58 crc kubenswrapper[4895]: I0129 16:31:58.883443 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:31:59 crc kubenswrapper[4895]: I0129 16:31:59.053235 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d911040-c2e2-4da1-9291-f13f91e6fb87" path="/var/lib/kubelet/pods/2d911040-c2e2-4da1-9291-f13f91e6fb87/volumes" Jan 29 16:31:59 crc kubenswrapper[4895]: I0129 16:31:59.195549 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:31:59 crc kubenswrapper[4895]: W0129 16:31:59.195892 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4c21cd5_3241_44a4_a189_025ef2084f9d.slice/crio-475dc3e2be606d97b6adb8af9447c8ab50e3bca1e526682f7b28371da42e7fdc WatchSource:0}: Error finding container 475dc3e2be606d97b6adb8af9447c8ab50e3bca1e526682f7b28371da42e7fdc: Status 404 returned error can't find the container with id 475dc3e2be606d97b6adb8af9447c8ab50e3bca1e526682f7b28371da42e7fdc Jan 29 16:31:59 crc kubenswrapper[4895]: I0129 16:31:59.481191 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a4c21cd5-3241-44a4-a189-025ef2084f9d","Type":"ContainerStarted","Data":"92b4d7548a970e5a867f3c0af4a80e4055190ce84fa1ea8ac41c546a00c8022a"} Jan 29 16:31:59 crc kubenswrapper[4895]: I0129 16:31:59.481258 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a4c21cd5-3241-44a4-a189-025ef2084f9d","Type":"ContainerStarted","Data":"475dc3e2be606d97b6adb8af9447c8ab50e3bca1e526682f7b28371da42e7fdc"} Jan 29 16:31:59 crc kubenswrapper[4895]: I0129 16:31:59.481389 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" 
Jan 29 16:31:59 crc kubenswrapper[4895]: I0129 16:31:59.505321 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.505295506 podStartE2EDuration="1.505295506s" podCreationTimestamp="2026-01-29 16:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:59.497934506 +0000 UTC m=+1203.300911830" watchObservedRunningTime="2026-01-29 16:31:59.505295506 +0000 UTC m=+1203.308272760" Jan 29 16:31:59 crc kubenswrapper[4895]: I0129 16:31:59.964940 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.081628 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-run-httpd\") pod \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.081748 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-496vw\" (UniqueName: \"kubernetes.io/projected/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-kube-api-access-496vw\") pod \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.081914 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-log-httpd\") pod \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.082131 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-combined-ca-bundle\") pod \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.082698 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" (UID: "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.082953 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" (UID: "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.083018 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-sg-core-conf-yaml\") pod \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.083077 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-scripts\") pod \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.083105 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-config-data\") pod 
\"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\" (UID: \"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428\") " Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.083738 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.083762 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.091018 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-scripts" (OuterVolumeSpecName: "scripts") pod "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" (UID: "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.091149 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-kube-api-access-496vw" (OuterVolumeSpecName: "kube-api-access-496vw") pod "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" (UID: "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428"). InnerVolumeSpecName "kube-api-access-496vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.113392 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" (UID: "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.140816 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-config-data" (OuterVolumeSpecName: "config-data") pod "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" (UID: "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.156651 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" (UID: "06d3cbc6-fffd-45bd-b5cf-4cbd6d936428"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.187522 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.187881 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.187945 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.188022 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:00 
crc kubenswrapper[4895]: I0129 16:32:00.188086 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-496vw\" (UniqueName: \"kubernetes.io/projected/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428-kube-api-access-496vw\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.497474 4895 generic.go:334] "Generic (PLEG): container finished" podID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerID="5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a" exitCode=0 Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.499629 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.508191 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428","Type":"ContainerDied","Data":"5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a"} Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.508653 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06d3cbc6-fffd-45bd-b5cf-4cbd6d936428","Type":"ContainerDied","Data":"a749f3e33ad71183806fb115cbbca7a92f775d0470f6a3b559b1fcfb6081cbbe"} Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.508734 4895 scope.go:117] "RemoveContainer" containerID="8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.543515 4895 scope.go:117] "RemoveContainer" containerID="5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.582192 4895 scope.go:117] "RemoveContainer" containerID="db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.619304 4895 scope.go:117] "RemoveContainer" 
containerID="8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02" Jan 29 16:32:00 crc kubenswrapper[4895]: E0129 16:32:00.619888 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02\": container with ID starting with 8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02 not found: ID does not exist" containerID="8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.619946 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02"} err="failed to get container status \"8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02\": rpc error: code = NotFound desc = could not find container \"8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02\": container with ID starting with 8e6f60a2fc9dc1ab75a47b15f57feca2957a779fb28c46d8b9f8712f6cd24d02 not found: ID does not exist" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.619978 4895 scope.go:117] "RemoveContainer" containerID="5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a" Jan 29 16:32:00 crc kubenswrapper[4895]: E0129 16:32:00.620361 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a\": container with ID starting with 5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a not found: ID does not exist" containerID="5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.620383 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a"} err="failed to get container status \"5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a\": rpc error: code = NotFound desc = could not find container \"5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a\": container with ID starting with 5759b783e8c839c853e981b47555325f9b148167fa338fab469911a5c68cac2a not found: ID does not exist" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.620402 4895 scope.go:117] "RemoveContainer" containerID="db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283" Jan 29 16:32:00 crc kubenswrapper[4895]: E0129 16:32:00.626942 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283\": container with ID starting with db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283 not found: ID does not exist" containerID="db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.627007 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283"} err="failed to get container status \"db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283\": rpc error: code = NotFound desc = could not find container \"db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283\": container with ID starting with db614c20eb8b815bac3af1c1bb5d18cf9dfa82a62b8282aed46f9de05a646283 not found: ID does not exist" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.640163 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.662197 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 
29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.675953 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:32:00 crc kubenswrapper[4895]: E0129 16:32:00.676533 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerName="ceilometer-central-agent" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.676562 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerName="ceilometer-central-agent" Jan 29 16:32:00 crc kubenswrapper[4895]: E0129 16:32:00.676599 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerName="ceilometer-notification-agent" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.676607 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerName="ceilometer-notification-agent" Jan 29 16:32:00 crc kubenswrapper[4895]: E0129 16:32:00.676619 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerName="sg-core" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.676627 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerName="sg-core" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.676787 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerName="ceilometer-notification-agent" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.676806 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerName="ceilometer-central-agent" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.676816 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" containerName="sg-core" Jan 29 
16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.678827 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.683265 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.683477 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.685031 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.806068 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.806147 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-scripts\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.806294 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09c98c7-08b6-4e32-b310-d545896b1d5a-log-httpd\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.806353 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.806394 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09c98c7-08b6-4e32-b310-d545896b1d5a-run-httpd\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.806851 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccgnk\" (UniqueName: \"kubernetes.io/projected/e09c98c7-08b6-4e32-b310-d545896b1d5a-kube-api-access-ccgnk\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.806979 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-config-data\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.909250 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccgnk\" (UniqueName: \"kubernetes.io/projected/e09c98c7-08b6-4e32-b310-d545896b1d5a-kube-api-access-ccgnk\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.909302 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-config-data\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " 
pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.909381 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.909405 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-scripts\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.909446 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09c98c7-08b6-4e32-b310-d545896b1d5a-log-httpd\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.909479 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.909499 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09c98c7-08b6-4e32-b310-d545896b1d5a-run-httpd\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.910018 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e09c98c7-08b6-4e32-b310-d545896b1d5a-run-httpd\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.912738 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09c98c7-08b6-4e32-b310-d545896b1d5a-log-httpd\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.919522 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-config-data\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.925944 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.926774 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.929204 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccgnk\" (UniqueName: \"kubernetes.io/projected/e09c98c7-08b6-4e32-b310-d545896b1d5a-kube-api-access-ccgnk\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:00 crc kubenswrapper[4895]: I0129 16:32:00.939127 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-scripts\") pod \"ceilometer-0\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " pod="openstack/ceilometer-0" Jan 29 16:32:01 crc kubenswrapper[4895]: I0129 16:32:01.008050 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:32:01 crc kubenswrapper[4895]: I0129 16:32:01.056807 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06d3cbc6-fffd-45bd-b5cf-4cbd6d936428" path="/var/lib/kubelet/pods/06d3cbc6-fffd-45bd-b5cf-4cbd6d936428/volumes" Jan 29 16:32:01 crc kubenswrapper[4895]: I0129 16:32:01.491983 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:32:01 crc kubenswrapper[4895]: I0129 16:32:01.504542 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:32:01 crc kubenswrapper[4895]: I0129 16:32:01.515454 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09c98c7-08b6-4e32-b310-d545896b1d5a","Type":"ContainerStarted","Data":"ce2721779155e17fc0a6a6b81a6e98379011720d9251b1ba1725c1d7fa29a402"} Jan 29 16:32:02 crc kubenswrapper[4895]: I0129 16:32:02.533273 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09c98c7-08b6-4e32-b310-d545896b1d5a","Type":"ContainerStarted","Data":"eb564d4bae5d4196379241f0277b03daaa388edf74e16ed1162d33fee7f15749"} Jan 29 16:32:03 crc kubenswrapper[4895]: I0129 16:32:03.547967 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09c98c7-08b6-4e32-b310-d545896b1d5a","Type":"ContainerStarted","Data":"6e15d5b3b141fabcc335de298706f346763e5578a1ec83bf9b4049e4d62369b2"} Jan 29 16:32:03 crc kubenswrapper[4895]: E0129 16:32:03.813945 4895 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:32:03 crc kubenswrapper[4895]: E0129 16:32:03.814179 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccgnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e09c98c7-08b6-4e32-b310-d545896b1d5a): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:03 crc kubenswrapper[4895]: E0129 16:32:03.816293 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" Jan 29 16:32:04 crc kubenswrapper[4895]: I0129 16:32:04.584788 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e09c98c7-08b6-4e32-b310-d545896b1d5a","Type":"ContainerStarted","Data":"0b3d01a70d674d61ead5c03dd02af8500213453773a49719f4b87cef0ec20d28"} Jan 29 16:32:04 crc kubenswrapper[4895]: E0129 16:32:04.586681 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" Jan 29 16:32:05 crc kubenswrapper[4895]: E0129 16:32:05.600196 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" Jan 29 16:32:08 crc kubenswrapper[4895]: I0129 16:32:08.936670 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.571461 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-vmblf"] Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.575266 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.578931 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.578986 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.599548 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vmblf"] Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.706134 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vmblf\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") " pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.706181 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnc4q\" (UniqueName: \"kubernetes.io/projected/8de7c5e0-275a-4e2e-9451-a653e428b29f-kube-api-access-fnc4q\") pod \"nova-cell0-cell-mapping-vmblf\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") " pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.706206 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-scripts\") pod \"nova-cell0-cell-mapping-vmblf\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") " pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.706274 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-config-data\") pod \"nova-cell0-cell-mapping-vmblf\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") " pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.784744 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.787375 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.789234 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.799821 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.801275 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.804517 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.808330 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.809367 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-config-data\") pod \"nova-cell0-cell-mapping-vmblf\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") " pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.809485 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vmblf\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") " pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.809515 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnc4q\" (UniqueName: \"kubernetes.io/projected/8de7c5e0-275a-4e2e-9451-a653e428b29f-kube-api-access-fnc4q\") pod \"nova-cell0-cell-mapping-vmblf\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") " pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.809536 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-scripts\") pod \"nova-cell0-cell-mapping-vmblf\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") " pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.827074 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-scripts\") pod \"nova-cell0-cell-mapping-vmblf\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") " pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.827205 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-config-data\") pod \"nova-cell0-cell-mapping-vmblf\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") " pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.827251 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-combined-ca-bundle\") pod 
\"nova-cell0-cell-mapping-vmblf\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") " pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.847124 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnc4q\" (UniqueName: \"kubernetes.io/projected/8de7c5e0-275a-4e2e-9451-a653e428b29f-kube-api-access-fnc4q\") pod \"nova-cell0-cell-mapping-vmblf\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") " pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.858054 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.901213 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vmblf" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.904442 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.906349 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.910895 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.920261 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dw7d\" (UniqueName: \"kubernetes.io/projected/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-kube-api-access-6dw7d\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.920599 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd560-0c97-4573-8925-b59503518911-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5c6dd560-0c97-4573-8925-b59503518911\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.920651 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6sjj\" (UniqueName: \"kubernetes.io/projected/5c6dd560-0c97-4573-8925-b59503518911-kube-api-access-s6sjj\") pod \"nova-scheduler-0\" (UID: \"5c6dd560-0c97-4573-8925-b59503518911\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.920824 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd560-0c97-4573-8925-b59503518911-config-data\") pod \"nova-scheduler-0\" (UID: \"5c6dd560-0c97-4573-8925-b59503518911\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.920921 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.920989 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:09 crc kubenswrapper[4895]: I0129 16:32:09.924456 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.017782 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.019732 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.022724 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd560-0c97-4573-8925-b59503518911-config-data\") pod \"nova-scheduler-0\" (UID: \"5c6dd560-0c97-4573-8925-b59503518911\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.022768 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.022805 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.022837 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dw7d\" (UniqueName: \"kubernetes.io/projected/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-kube-api-access-6dw7d\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.022930 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-config-data\") pod \"nova-api-0\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.022975 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd560-0c97-4573-8925-b59503518911-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5c6dd560-0c97-4573-8925-b59503518911\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.022997 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6sjj\" (UniqueName: \"kubernetes.io/projected/5c6dd560-0c97-4573-8925-b59503518911-kube-api-access-s6sjj\") pod \"nova-scheduler-0\" (UID: \"5c6dd560-0c97-4573-8925-b59503518911\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.023014 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-logs\") pod \"nova-api-0\" (UID: 
\"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.023032 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.023071 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmkv\" (UniqueName: \"kubernetes.io/projected/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-kube-api-access-pdmkv\") pod \"nova-api-0\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.024838 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.034349 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.034500 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd560-0c97-4573-8925-b59503518911-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5c6dd560-0c97-4573-8925-b59503518911\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.035240 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd560-0c97-4573-8925-b59503518911-config-data\") pod \"nova-scheduler-0\" (UID: 
\"5c6dd560-0c97-4573-8925-b59503518911\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.043518 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.047388 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.057568 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dw7d\" (UniqueName: \"kubernetes.io/projected/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-kube-api-access-6dw7d\") pod \"nova-cell1-novncproxy-0\" (UID: \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.066950 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6sjj\" (UniqueName: \"kubernetes.io/projected/5c6dd560-0c97-4573-8925-b59503518911-kube-api-access-s6sjj\") pod \"nova-scheduler-0\" (UID: \"5c6dd560-0c97-4573-8925-b59503518911\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.104003 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.124326 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmkv\" (UniqueName: \"kubernetes.io/projected/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-kube-api-access-pdmkv\") pod \"nova-api-0\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.124406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg464\" (UniqueName: \"kubernetes.io/projected/a1e307e3-73b8-48d2-88e3-dd6211b48405-kube-api-access-pg464\") pod \"nova-metadata-0\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") " pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.124535 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-config-data\") pod \"nova-api-0\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.124571 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e307e3-73b8-48d2-88e3-dd6211b48405-logs\") pod \"nova-metadata-0\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") " pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.124605 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e307e3-73b8-48d2-88e3-dd6211b48405-config-data\") pod \"nova-metadata-0\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") " pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.124642 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e307e3-73b8-48d2-88e3-dd6211b48405-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") " pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.124673 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-logs\") pod \"nova-api-0\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.124693 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.129645 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-logs\") pod \"nova-api-0\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.133343 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.134608 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-config-data\") pod \"nova-api-0\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.137263 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.199709 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmkv\" (UniqueName: \"kubernetes.io/projected/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-kube-api-access-pdmkv\") pod \"nova-api-0\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.224502 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-gk6f9"] Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.226932 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg464\" (UniqueName: \"kubernetes.io/projected/a1e307e3-73b8-48d2-88e3-dd6211b48405-kube-api-access-pg464\") pod \"nova-metadata-0\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") " pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.227061 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e307e3-73b8-48d2-88e3-dd6211b48405-logs\") pod \"nova-metadata-0\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") " pod="openstack/nova-metadata-0" Jan 29 16:32:10 
crc kubenswrapper[4895]: I0129 16:32:10.227094 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e307e3-73b8-48d2-88e3-dd6211b48405-config-data\") pod \"nova-metadata-0\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") " pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.227136 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e307e3-73b8-48d2-88e3-dd6211b48405-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") " pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.227647 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.229110 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e307e3-73b8-48d2-88e3-dd6211b48405-logs\") pod \"nova-metadata-0\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") " pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.258756 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e307e3-73b8-48d2-88e3-dd6211b48405-config-data\") pod \"nova-metadata-0\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") " pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.259529 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e307e3-73b8-48d2-88e3-dd6211b48405-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") " pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.270614 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg464\" (UniqueName: \"kubernetes.io/projected/a1e307e3-73b8-48d2-88e3-dd6211b48405-kube-api-access-pg464\") pod \"nova-metadata-0\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") " pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.327039 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.340700 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-config\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.340794 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.341029 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.341124 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79mr7\" (UniqueName: \"kubernetes.io/projected/5726a2e0-7132-4dba-a2e2-5d19e2260f49-kube-api-access-79mr7\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: 
\"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.341231 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.361601 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-gk6f9"] Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.398187 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.445399 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-config\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.447896 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-config\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.454907 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 
16:32:10.456123 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.463685 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79mr7\" (UniqueName: \"kubernetes.io/projected/5726a2e0-7132-4dba-a2e2-5d19e2260f49-kube-api-access-79mr7\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.463856 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.464723 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.455804 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.457302 4895 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.492578 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79mr7\" (UniqueName: \"kubernetes.io/projected/5726a2e0-7132-4dba-a2e2-5d19e2260f49-kube-api-access-79mr7\") pod \"dnsmasq-dns-8b8cf6657-gk6f9\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.499827 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vmblf"] Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.612352 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.722786 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vmblf" event={"ID":"8de7c5e0-275a-4e2e-9451-a653e428b29f","Type":"ContainerStarted","Data":"2e4df3078ac2725542e714330d4a06a194ff01caf9e552cb2d76bb67676da85c"} Jan 29 16:32:10 crc kubenswrapper[4895]: I0129 16:32:10.918664 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.030609 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.061549 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.081424 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bt4wg"] Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.084746 4895 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.087054 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.087832 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.089181 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bt4wg"] Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.180900 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bt4wg\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.181038 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-config-data\") pod \"nova-cell1-conductor-db-sync-bt4wg\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.181292 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjj52\" (UniqueName: \"kubernetes.io/projected/53c09db6-a699-4b75-9503-08bfc7ad94c1-kube-api-access-kjj52\") pod \"nova-cell1-conductor-db-sync-bt4wg\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.181401 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-scripts\") pod \"nova-cell1-conductor-db-sync-bt4wg\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.225813 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.233824 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-gk6f9"] Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.283145 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-config-data\") pod \"nova-cell1-conductor-db-sync-bt4wg\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.283244 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjj52\" (UniqueName: \"kubernetes.io/projected/53c09db6-a699-4b75-9503-08bfc7ad94c1-kube-api-access-kjj52\") pod \"nova-cell1-conductor-db-sync-bt4wg\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.283284 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-scripts\") pod \"nova-cell1-conductor-db-sync-bt4wg\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.283399 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bt4wg\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.287004 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-scripts\") pod \"nova-cell1-conductor-db-sync-bt4wg\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.287159 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bt4wg\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.287794 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-config-data\") pod \"nova-cell1-conductor-db-sync-bt4wg\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.302005 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjj52\" (UniqueName: \"kubernetes.io/projected/53c09db6-a699-4b75-9503-08bfc7ad94c1-kube-api-access-kjj52\") pod \"nova-cell1-conductor-db-sync-bt4wg\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.421896 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.742244 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vmblf" event={"ID":"8de7c5e0-275a-4e2e-9451-a653e428b29f","Type":"ContainerStarted","Data":"5c4b8cb083834686bf7c5ccbd359b21df14c20fbf9acc45a2f692c7e8468af56"} Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.748032 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1e307e3-73b8-48d2-88e3-dd6211b48405","Type":"ContainerStarted","Data":"2f8ac43fef80df7d0285beeab81f2ad3a01a36f189a98e84999aa253a43b0074"} Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.752213 4895 generic.go:334] "Generic (PLEG): container finished" podID="5726a2e0-7132-4dba-a2e2-5d19e2260f49" containerID="9510b917dbaba86b05ba6290906bbc16befbb1b057872bc52f65f62ecd9b10ea" exitCode=0 Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.752284 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" event={"ID":"5726a2e0-7132-4dba-a2e2-5d19e2260f49","Type":"ContainerDied","Data":"9510b917dbaba86b05ba6290906bbc16befbb1b057872bc52f65f62ecd9b10ea"} Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.752316 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" event={"ID":"5726a2e0-7132-4dba-a2e2-5d19e2260f49","Type":"ContainerStarted","Data":"4df1e7bad77ac8dc6b3159ecf265fbb2df6636f0832147d6a64b1acf77b80f3b"} Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.761052 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5c6dd560-0c97-4573-8925-b59503518911","Type":"ContainerStarted","Data":"5817b9e55ed966ac8fa25516573393a14b1bdeab6e04e2ac269bf9e254eddb75"} Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.762070 4895 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-cell0-cell-mapping-vmblf" podStartSLOduration=2.762052605 podStartE2EDuration="2.762052605s" podCreationTimestamp="2026-01-29 16:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:11.760457632 +0000 UTC m=+1215.563434896" watchObservedRunningTime="2026-01-29 16:32:11.762052605 +0000 UTC m=+1215.565029879" Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.766189 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f","Type":"ContainerStarted","Data":"6030e49c38c322def988fab9ba8df2edb28721a7fcf7e9075631aeb1141fb570"} Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.771230 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147","Type":"ContainerStarted","Data":"e9c8d6626cae270fecfd5b04314d023c0d999333699daee570850791d07f3fea"} Jan 29 16:32:11 crc kubenswrapper[4895]: I0129 16:32:11.945275 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bt4wg"] Jan 29 16:32:11 crc kubenswrapper[4895]: W0129 16:32:11.988517 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53c09db6_a699_4b75_9503_08bfc7ad94c1.slice/crio-403fbd9f3a4409af7634bcd85cdb3753308ed69691af9e681f3d28f2e927fb9c WatchSource:0}: Error finding container 403fbd9f3a4409af7634bcd85cdb3753308ed69691af9e681f3d28f2e927fb9c: Status 404 returned error can't find the container with id 403fbd9f3a4409af7634bcd85cdb3753308ed69691af9e681f3d28f2e927fb9c Jan 29 16:32:12 crc kubenswrapper[4895]: I0129 16:32:12.785454 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bt4wg" 
event={"ID":"53c09db6-a699-4b75-9503-08bfc7ad94c1","Type":"ContainerStarted","Data":"403fbd9f3a4409af7634bcd85cdb3753308ed69691af9e681f3d28f2e927fb9c"} Jan 29 16:32:12 crc kubenswrapper[4895]: I0129 16:32:12.791856 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" event={"ID":"5726a2e0-7132-4dba-a2e2-5d19e2260f49","Type":"ContainerStarted","Data":"cc8d956593f75921d938f88868311a00440f056f7e873baba313ff55a79b8f71"} Jan 29 16:32:12 crc kubenswrapper[4895]: I0129 16:32:12.792037 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:32:12 crc kubenswrapper[4895]: I0129 16:32:12.826783 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" podStartSLOduration=2.826731605 podStartE2EDuration="2.826731605s" podCreationTimestamp="2026-01-29 16:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:12.819576291 +0000 UTC m=+1216.622553565" watchObservedRunningTime="2026-01-29 16:32:12.826731605 +0000 UTC m=+1216.629708889" Jan 29 16:32:13 crc kubenswrapper[4895]: I0129 16:32:13.411226 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:32:13 crc kubenswrapper[4895]: I0129 16:32:13.423025 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:32:13 crc kubenswrapper[4895]: I0129 16:32:13.815166 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bt4wg" event={"ID":"53c09db6-a699-4b75-9503-08bfc7ad94c1","Type":"ContainerStarted","Data":"0f8410cb12cd607e0e7d025327a65a95e2311d578ae6378d2fb16e40632b8c8f"} Jan 29 16:32:13 crc kubenswrapper[4895]: I0129 16:32:13.848740 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-conductor-db-sync-bt4wg" podStartSLOduration=2.848718448 podStartE2EDuration="2.848718448s" podCreationTimestamp="2026-01-29 16:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:13.836591559 +0000 UTC m=+1217.639568833" watchObservedRunningTime="2026-01-29 16:32:13.848718448 +0000 UTC m=+1217.651695712" Jan 29 16:32:14 crc kubenswrapper[4895]: I0129 16:32:14.837182 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147","Type":"ContainerStarted","Data":"24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d"} Jan 29 16:32:14 crc kubenswrapper[4895]: I0129 16:32:14.837640 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147","Type":"ContainerStarted","Data":"7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8"} Jan 29 16:32:14 crc kubenswrapper[4895]: I0129 16:32:14.882078 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1e307e3-73b8-48d2-88e3-dd6211b48405","Type":"ContainerStarted","Data":"c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c"} Jan 29 16:32:14 crc kubenswrapper[4895]: I0129 16:32:14.882502 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1e307e3-73b8-48d2-88e3-dd6211b48405","Type":"ContainerStarted","Data":"d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec"} Jan 29 16:32:14 crc kubenswrapper[4895]: I0129 16:32:14.882402 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a1e307e3-73b8-48d2-88e3-dd6211b48405" containerName="nova-metadata-metadata" containerID="cri-o://c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c" gracePeriod=30 Jan 
29 16:32:14 crc kubenswrapper[4895]: I0129 16:32:14.882246 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a1e307e3-73b8-48d2-88e3-dd6211b48405" containerName="nova-metadata-log" containerID="cri-o://d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec" gracePeriod=30 Jan 29 16:32:14 crc kubenswrapper[4895]: I0129 16:32:14.902739 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5c6dd560-0c97-4573-8925-b59503518911","Type":"ContainerStarted","Data":"8b888e8751e7acb8893779da648b792b69c8f996fbe3f12052ec935cbd8cd535"} Jan 29 16:32:14 crc kubenswrapper[4895]: I0129 16:32:14.913145 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8f29441a4ef791b21d3d103959ba22060b3a44669cac081346a33981d89f7560" gracePeriod=30 Jan 29 16:32:14 crc kubenswrapper[4895]: I0129 16:32:14.913435 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f","Type":"ContainerStarted","Data":"8f29441a4ef791b21d3d103959ba22060b3a44669cac081346a33981d89f7560"} Jan 29 16:32:14 crc kubenswrapper[4895]: I0129 16:32:14.919860 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.102660273 podStartE2EDuration="5.919838703s" podCreationTimestamp="2026-01-29 16:32:09 +0000 UTC" firstStartedPulling="2026-01-29 16:32:11.027265482 +0000 UTC m=+1214.830242756" lastFinishedPulling="2026-01-29 16:32:13.844443922 +0000 UTC m=+1217.647421186" observedRunningTime="2026-01-29 16:32:14.90313041 +0000 UTC m=+1218.706107674" watchObservedRunningTime="2026-01-29 16:32:14.919838703 +0000 UTC m=+1218.722815967" Jan 29 16:32:14 crc kubenswrapper[4895]: I0129 
16:32:14.932774 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.120023272 podStartE2EDuration="5.932739582s" podCreationTimestamp="2026-01-29 16:32:09 +0000 UTC" firstStartedPulling="2026-01-29 16:32:11.027657972 +0000 UTC m=+1214.830635246" lastFinishedPulling="2026-01-29 16:32:13.840374272 +0000 UTC m=+1217.643351556" observedRunningTime="2026-01-29 16:32:14.930540323 +0000 UTC m=+1218.733517597" watchObservedRunningTime="2026-01-29 16:32:14.932739582 +0000 UTC m=+1218.735716866"
Jan 29 16:32:14 crc kubenswrapper[4895]: I0129 16:32:14.954779 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.342636592 podStartE2EDuration="5.954749849s" podCreationTimestamp="2026-01-29 16:32:09 +0000 UTC" firstStartedPulling="2026-01-29 16:32:11.23103208 +0000 UTC m=+1215.034009344" lastFinishedPulling="2026-01-29 16:32:13.843145317 +0000 UTC m=+1217.646122601" observedRunningTime="2026-01-29 16:32:14.950174324 +0000 UTC m=+1218.753151578" watchObservedRunningTime="2026-01-29 16:32:14.954749849 +0000 UTC m=+1218.757727113"
Jan 29 16:32:14 crc kubenswrapper[4895]: I0129 16:32:14.981592 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.074147288 podStartE2EDuration="5.981562404s" podCreationTimestamp="2026-01-29 16:32:09 +0000 UTC" firstStartedPulling="2026-01-29 16:32:10.925685679 +0000 UTC m=+1214.728662943" lastFinishedPulling="2026-01-29 16:32:13.833100795 +0000 UTC m=+1217.636078059" observedRunningTime="2026-01-29 16:32:14.966580929 +0000 UTC m=+1218.769558193" watchObservedRunningTime="2026-01-29 16:32:14.981562404 +0000 UTC m=+1218.784539688"
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.105211 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.134737 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.401908 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.401993 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.654889 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.803381 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e307e3-73b8-48d2-88e3-dd6211b48405-logs\") pod \"a1e307e3-73b8-48d2-88e3-dd6211b48405\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") "
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.803570 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e307e3-73b8-48d2-88e3-dd6211b48405-combined-ca-bundle\") pod \"a1e307e3-73b8-48d2-88e3-dd6211b48405\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") "
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.803618 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e307e3-73b8-48d2-88e3-dd6211b48405-config-data\") pod \"a1e307e3-73b8-48d2-88e3-dd6211b48405\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") "
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.803909 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg464\" (UniqueName: \"kubernetes.io/projected/a1e307e3-73b8-48d2-88e3-dd6211b48405-kube-api-access-pg464\") pod \"a1e307e3-73b8-48d2-88e3-dd6211b48405\" (UID: \"a1e307e3-73b8-48d2-88e3-dd6211b48405\") "
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.804098 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1e307e3-73b8-48d2-88e3-dd6211b48405-logs" (OuterVolumeSpecName: "logs") pod "a1e307e3-73b8-48d2-88e3-dd6211b48405" (UID: "a1e307e3-73b8-48d2-88e3-dd6211b48405"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.804457 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e307e3-73b8-48d2-88e3-dd6211b48405-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.823542 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e307e3-73b8-48d2-88e3-dd6211b48405-kube-api-access-pg464" (OuterVolumeSpecName: "kube-api-access-pg464") pod "a1e307e3-73b8-48d2-88e3-dd6211b48405" (UID: "a1e307e3-73b8-48d2-88e3-dd6211b48405"). InnerVolumeSpecName "kube-api-access-pg464". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.838081 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e307e3-73b8-48d2-88e3-dd6211b48405-config-data" (OuterVolumeSpecName: "config-data") pod "a1e307e3-73b8-48d2-88e3-dd6211b48405" (UID: "a1e307e3-73b8-48d2-88e3-dd6211b48405"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.841103 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e307e3-73b8-48d2-88e3-dd6211b48405-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1e307e3-73b8-48d2-88e3-dd6211b48405" (UID: "a1e307e3-73b8-48d2-88e3-dd6211b48405"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.906689 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e307e3-73b8-48d2-88e3-dd6211b48405-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.906760 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e307e3-73b8-48d2-88e3-dd6211b48405-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.906778 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg464\" (UniqueName: \"kubernetes.io/projected/a1e307e3-73b8-48d2-88e3-dd6211b48405-kube-api-access-pg464\") on node \"crc\" DevicePath \"\""
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.924985 4895 generic.go:334] "Generic (PLEG): container finished" podID="a1e307e3-73b8-48d2-88e3-dd6211b48405" containerID="c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c" exitCode=0
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.925037 4895 generic.go:334] "Generic (PLEG): container finished" podID="a1e307e3-73b8-48d2-88e3-dd6211b48405" containerID="d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec" exitCode=143
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.926324 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.937613 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1e307e3-73b8-48d2-88e3-dd6211b48405","Type":"ContainerDied","Data":"c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c"}
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.937719 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1e307e3-73b8-48d2-88e3-dd6211b48405","Type":"ContainerDied","Data":"d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec"}
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.937733 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1e307e3-73b8-48d2-88e3-dd6211b48405","Type":"ContainerDied","Data":"2f8ac43fef80df7d0285beeab81f2ad3a01a36f189a98e84999aa253a43b0074"}
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.937755 4895 scope.go:117] "RemoveContainer" containerID="c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c"
Jan 29 16:32:15 crc kubenswrapper[4895]: I0129 16:32:15.983477 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.001203 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.007217 4895 scope.go:117] "RemoveContainer" containerID="d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.024921 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:32:16 crc kubenswrapper[4895]: E0129 16:32:16.025715 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e307e3-73b8-48d2-88e3-dd6211b48405" containerName="nova-metadata-log"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.025746 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e307e3-73b8-48d2-88e3-dd6211b48405" containerName="nova-metadata-log"
Jan 29 16:32:16 crc kubenswrapper[4895]: E0129 16:32:16.025765 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e307e3-73b8-48d2-88e3-dd6211b48405" containerName="nova-metadata-metadata"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.025776 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e307e3-73b8-48d2-88e3-dd6211b48405" containerName="nova-metadata-metadata"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.026041 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e307e3-73b8-48d2-88e3-dd6211b48405" containerName="nova-metadata-log"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.026071 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e307e3-73b8-48d2-88e3-dd6211b48405" containerName="nova-metadata-metadata"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.073503 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.080880 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.081196 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.081703 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.082049 4895 scope.go:117] "RemoveContainer" containerID="c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c"
Jan 29 16:32:16 crc kubenswrapper[4895]: E0129 16:32:16.084709 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c\": container with ID starting with c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c not found: ID does not exist" containerID="c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.084783 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c"} err="failed to get container status \"c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c\": rpc error: code = NotFound desc = could not find container \"c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c\": container with ID starting with c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c not found: ID does not exist"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.084819 4895 scope.go:117] "RemoveContainer" containerID="d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec"
Jan 29 16:32:16 crc kubenswrapper[4895]: E0129 16:32:16.085371 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec\": container with ID starting with d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec not found: ID does not exist" containerID="d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.085397 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec"} err="failed to get container status \"d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec\": rpc error: code = NotFound desc = could not find container \"d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec\": container with ID starting with d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec not found: ID does not exist"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.085518 4895 scope.go:117] "RemoveContainer" containerID="c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.085790 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c"} err="failed to get container status \"c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c\": rpc error: code = NotFound desc = could not find container \"c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c\": container with ID starting with c9ad38049c1d7ef983440d3c6ee21709baaf78ca40aa68b6c62a3365022c7d4c not found: ID does not exist"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.085822 4895 scope.go:117] "RemoveContainer" containerID="d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.087065 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec"} err="failed to get container status \"d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec\": rpc error: code = NotFound desc = could not find container \"d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec\": container with ID starting with d4aab7f5d2bd1981608f53a51613329bcd134bf6694b689a8794f51ce89b1aec not found: ID does not exist"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.113019 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.113073 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-logs\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.113205 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.113226 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-config-data\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.113247 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22lrj\" (UniqueName: \"kubernetes.io/projected/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-kube-api-access-22lrj\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.214918 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.214966 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-config-data\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.214985 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22lrj\" (UniqueName: \"kubernetes.io/projected/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-kube-api-access-22lrj\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.215059 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.215077 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-logs\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.215576 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-logs\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.221972 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.222140 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.222453 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-config-data\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.235486 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22lrj\" (UniqueName: \"kubernetes.io/projected/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-kube-api-access-22lrj\") pod \"nova-metadata-0\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.408975 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.909136 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:32:16 crc kubenswrapper[4895]: I0129 16:32:16.943690 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be5d40f4-271c-4e6f-8ac2-6648d64ea21e","Type":"ContainerStarted","Data":"5f895c50ebc2cf59b166357cfea2b727ec253856eb55919fe03dcd3e66b8614d"}
Jan 29 16:32:17 crc kubenswrapper[4895]: I0129 16:32:17.050185 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e307e3-73b8-48d2-88e3-dd6211b48405" path="/var/lib/kubelet/pods/a1e307e3-73b8-48d2-88e3-dd6211b48405/volumes"
Jan 29 16:32:17 crc kubenswrapper[4895]: I0129 16:32:17.957820 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be5d40f4-271c-4e6f-8ac2-6648d64ea21e","Type":"ContainerStarted","Data":"8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be"}
Jan 29 16:32:17 crc kubenswrapper[4895]: I0129 16:32:17.958290 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be5d40f4-271c-4e6f-8ac2-6648d64ea21e","Type":"ContainerStarted","Data":"47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7"}
Jan 29 16:32:17 crc kubenswrapper[4895]: I0129 16:32:17.997100 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.997077809 podStartE2EDuration="2.997077809s" podCreationTimestamp="2026-01-29 16:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:17.977244001 +0000 UTC m=+1221.780221265" watchObservedRunningTime="2026-01-29 16:32:17.997077809 +0000 UTC m=+1221.800055083"
Jan 29 16:32:18 crc kubenswrapper[4895]: I0129 16:32:18.973851 4895 generic.go:334] "Generic (PLEG): container finished" podID="8de7c5e0-275a-4e2e-9451-a653e428b29f" containerID="5c4b8cb083834686bf7c5ccbd359b21df14c20fbf9acc45a2f692c7e8468af56" exitCode=0
Jan 29 16:32:18 crc kubenswrapper[4895]: I0129 16:32:18.973914 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vmblf" event={"ID":"8de7c5e0-275a-4e2e-9451-a653e428b29f","Type":"ContainerDied","Data":"5c4b8cb083834686bf7c5ccbd359b21df14c20fbf9acc45a2f692c7e8468af56"}
Jan 29 16:32:19 crc kubenswrapper[4895]: E0129 16:32:19.171708 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest"
Jan 29 16:32:19 crc kubenswrapper[4895]: E0129 16:32:19.171987 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccgnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e09c98c7-08b6-4e32-b310-d545896b1d5a): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:32:19 crc kubenswrapper[4895]: E0129 16:32:19.173546 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a"
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.134961 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.188144 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.334489 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.334555 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.472169 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vmblf"
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.520222 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnc4q\" (UniqueName: \"kubernetes.io/projected/8de7c5e0-275a-4e2e-9451-a653e428b29f-kube-api-access-fnc4q\") pod \"8de7c5e0-275a-4e2e-9451-a653e428b29f\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") "
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.520405 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-scripts\") pod \"8de7c5e0-275a-4e2e-9451-a653e428b29f\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") "
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.520556 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-config-data\") pod \"8de7c5e0-275a-4e2e-9451-a653e428b29f\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") "
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.520698 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-combined-ca-bundle\") pod \"8de7c5e0-275a-4e2e-9451-a653e428b29f\" (UID: \"8de7c5e0-275a-4e2e-9451-a653e428b29f\") "
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.528592 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de7c5e0-275a-4e2e-9451-a653e428b29f-kube-api-access-fnc4q" (OuterVolumeSpecName: "kube-api-access-fnc4q") pod "8de7c5e0-275a-4e2e-9451-a653e428b29f" (UID: "8de7c5e0-275a-4e2e-9451-a653e428b29f"). InnerVolumeSpecName "kube-api-access-fnc4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.528590 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-scripts" (OuterVolumeSpecName: "scripts") pod "8de7c5e0-275a-4e2e-9451-a653e428b29f" (UID: "8de7c5e0-275a-4e2e-9451-a653e428b29f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.571409 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8de7c5e0-275a-4e2e-9451-a653e428b29f" (UID: "8de7c5e0-275a-4e2e-9451-a653e428b29f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.578976 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-config-data" (OuterVolumeSpecName: "config-data") pod "8de7c5e0-275a-4e2e-9451-a653e428b29f" (UID: "8de7c5e0-275a-4e2e-9451-a653e428b29f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.615059 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9"
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.623057 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.623098 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnc4q\" (UniqueName: \"kubernetes.io/projected/8de7c5e0-275a-4e2e-9451-a653e428b29f-kube-api-access-fnc4q\") on node \"crc\" DevicePath \"\""
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.623112 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.623127 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de7c5e0-275a-4e2e-9451-a653e428b29f-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.700801 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-pphqd"]
Jan 29 16:32:20 crc kubenswrapper[4895]: I0129 16:32:20.701167 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-pphqd" podUID="a980d03e-2583-427c-8133-5723e0eb7f69" containerName="dnsmasq-dns" containerID="cri-o://e5b03adefa3c5ee1f7eb9afe5c1d21736345c9ca03e9c7b6890311e1e86837f7" gracePeriod=10
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.008240 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vmblf" event={"ID":"8de7c5e0-275a-4e2e-9451-a653e428b29f","Type":"ContainerDied","Data":"2e4df3078ac2725542e714330d4a06a194ff01caf9e552cb2d76bb67676da85c"}
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.008265 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vmblf"
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.008285 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e4df3078ac2725542e714330d4a06a194ff01caf9e552cb2d76bb67676da85c"
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.011178 4895 generic.go:334] "Generic (PLEG): container finished" podID="a980d03e-2583-427c-8133-5723e0eb7f69" containerID="e5b03adefa3c5ee1f7eb9afe5c1d21736345c9ca03e9c7b6890311e1e86837f7" exitCode=0
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.011272 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-pphqd" event={"ID":"a980d03e-2583-427c-8133-5723e0eb7f69","Type":"ContainerDied","Data":"e5b03adefa3c5ee1f7eb9afe5c1d21736345c9ca03e9c7b6890311e1e86837f7"}
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.013500 4895 generic.go:334] "Generic (PLEG): container finished" podID="53c09db6-a699-4b75-9503-08bfc7ad94c1" containerID="0f8410cb12cd607e0e7d025327a65a95e2311d578ae6378d2fb16e40632b8c8f" exitCode=0
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.013584 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bt4wg" event={"ID":"53c09db6-a699-4b75-9503-08bfc7ad94c1","Type":"ContainerDied","Data":"0f8410cb12cd607e0e7d025327a65a95e2311d578ae6378d2fb16e40632b8c8f"}
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.089921 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.203774 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.204589 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" containerName="nova-api-log" containerID="cri-o://7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8" gracePeriod=30
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.204698 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" containerName="nova-api-api" containerID="cri-o://24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d" gracePeriod=30
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.219132 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.174:8774/\": EOF"
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.220709 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-pphqd"
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.225403 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.174:8774/\": EOF"
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.258404 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.259272 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="be5d40f4-271c-4e6f-8ac2-6648d64ea21e" containerName="nova-metadata-metadata" containerID="cri-o://8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be" gracePeriod=30
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.259211 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="be5d40f4-271c-4e6f-8ac2-6648d64ea21e" containerName="nova-metadata-log" containerID="cri-o://47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7" gracePeriod=30
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.333845 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-ovsdbserver-sb\") pod \"a980d03e-2583-427c-8133-5723e0eb7f69\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") "
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.333926 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-dns-svc\") pod \"a980d03e-2583-427c-8133-5723e0eb7f69\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") "
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.334003 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-config\") pod \"a980d03e-2583-427c-8133-5723e0eb7f69\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") "
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.334042 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-ovsdbserver-nb\") pod \"a980d03e-2583-427c-8133-5723e0eb7f69\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") "
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.334152 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mdc7\" (UniqueName: \"kubernetes.io/projected/a980d03e-2583-427c-8133-5723e0eb7f69-kube-api-access-5mdc7\") pod \"a980d03e-2583-427c-8133-5723e0eb7f69\" (UID: \"a980d03e-2583-427c-8133-5723e0eb7f69\") "
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.339636 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a980d03e-2583-427c-8133-5723e0eb7f69-kube-api-access-5mdc7" (OuterVolumeSpecName: "kube-api-access-5mdc7") pod "a980d03e-2583-427c-8133-5723e0eb7f69" (UID: "a980d03e-2583-427c-8133-5723e0eb7f69"). InnerVolumeSpecName "kube-api-access-5mdc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.385068 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a980d03e-2583-427c-8133-5723e0eb7f69" (UID: "a980d03e-2583-427c-8133-5723e0eb7f69"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.389733 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a980d03e-2583-427c-8133-5723e0eb7f69" (UID: "a980d03e-2583-427c-8133-5723e0eb7f69"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.398799 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-config" (OuterVolumeSpecName: "config") pod "a980d03e-2583-427c-8133-5723e0eb7f69" (UID: "a980d03e-2583-427c-8133-5723e0eb7f69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.402467 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a980d03e-2583-427c-8133-5723e0eb7f69" (UID: "a980d03e-2583-427c-8133-5723e0eb7f69"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.409096 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.409158 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.436577 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mdc7\" (UniqueName: \"kubernetes.io/projected/a980d03e-2583-427c-8133-5723e0eb7f69-kube-api-access-5mdc7\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.437773 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.437882 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.437960 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.438021 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a980d03e-2583-427c-8133-5723e0eb7f69-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.588061 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:32:21 crc kubenswrapper[4895]: I0129 16:32:21.882813 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.026795 4895 generic.go:334] "Generic (PLEG): container finished" podID="be5d40f4-271c-4e6f-8ac2-6648d64ea21e" containerID="8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be" exitCode=0 Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.026830 4895 generic.go:334] "Generic (PLEG): container finished" podID="be5d40f4-271c-4e6f-8ac2-6648d64ea21e" containerID="47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7" exitCode=143 Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.026966 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.027727 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be5d40f4-271c-4e6f-8ac2-6648d64ea21e","Type":"ContainerDied","Data":"8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be"} Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.027779 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be5d40f4-271c-4e6f-8ac2-6648d64ea21e","Type":"ContainerDied","Data":"47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7"} Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.027792 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be5d40f4-271c-4e6f-8ac2-6648d64ea21e","Type":"ContainerDied","Data":"5f895c50ebc2cf59b166357cfea2b727ec253856eb55919fe03dcd3e66b8614d"} Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.027812 4895 scope.go:117] "RemoveContainer" containerID="8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.030126 4895 generic.go:334] "Generic (PLEG): container finished" podID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" 
containerID="7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8" exitCode=143 Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.030194 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147","Type":"ContainerDied","Data":"7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8"} Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.032463 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-pphqd" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.033964 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-pphqd" event={"ID":"a980d03e-2583-427c-8133-5723e0eb7f69","Type":"ContainerDied","Data":"7707ff386c2ba18e3512c693422c68d7157cb7f114425b39ff3a622458be5bbd"} Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.051328 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-combined-ca-bundle\") pod \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.051488 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-logs\") pod \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.051588 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-nova-metadata-tls-certs\") pod \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 
16:32:22.051647 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-config-data\") pod \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.052139 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-logs" (OuterVolumeSpecName: "logs") pod "be5d40f4-271c-4e6f-8ac2-6648d64ea21e" (UID: "be5d40f4-271c-4e6f-8ac2-6648d64ea21e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.052857 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22lrj\" (UniqueName: \"kubernetes.io/projected/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-kube-api-access-22lrj\") pod \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\" (UID: \"be5d40f4-271c-4e6f-8ac2-6648d64ea21e\") " Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.053728 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.058327 4895 scope.go:117] "RemoveContainer" containerID="47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.059262 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-kube-api-access-22lrj" (OuterVolumeSpecName: "kube-api-access-22lrj") pod "be5d40f4-271c-4e6f-8ac2-6648d64ea21e" (UID: "be5d40f4-271c-4e6f-8ac2-6648d64ea21e"). InnerVolumeSpecName "kube-api-access-22lrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.085990 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-pphqd"] Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.092316 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be5d40f4-271c-4e6f-8ac2-6648d64ea21e" (UID: "be5d40f4-271c-4e6f-8ac2-6648d64ea21e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.093096 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-config-data" (OuterVolumeSpecName: "config-data") pod "be5d40f4-271c-4e6f-8ac2-6648d64ea21e" (UID: "be5d40f4-271c-4e6f-8ac2-6648d64ea21e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.097703 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-pphqd"] Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.141293 4895 scope.go:117] "RemoveContainer" containerID="8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be" Jan 29 16:32:22 crc kubenswrapper[4895]: E0129 16:32:22.143237 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be\": container with ID starting with 8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be not found: ID does not exist" containerID="8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.143292 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be"} err="failed to get container status \"8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be\": rpc error: code = NotFound desc = could not find container \"8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be\": container with ID starting with 8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be not found: ID does not exist" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.143353 4895 scope.go:117] "RemoveContainer" containerID="47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7" Jan 29 16:32:22 crc kubenswrapper[4895]: E0129 16:32:22.143985 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7\": container with ID starting with 47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7 not found: 
ID does not exist" containerID="47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.144017 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7"} err="failed to get container status \"47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7\": rpc error: code = NotFound desc = could not find container \"47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7\": container with ID starting with 47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7 not found: ID does not exist" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.144062 4895 scope.go:117] "RemoveContainer" containerID="8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.144626 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be"} err="failed to get container status \"8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be\": rpc error: code = NotFound desc = could not find container \"8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be\": container with ID starting with 8a6308db3ce727b2992a06606b706b4f759ddad55ca6eb8d70f81121ce73c2be not found: ID does not exist" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.144691 4895 scope.go:117] "RemoveContainer" containerID="47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.145621 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7"} err="failed to get container status \"47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7\": rpc error: code = 
NotFound desc = could not find container \"47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7\": container with ID starting with 47e205f9e18031f08a8eedca3dae328a36509a1ba38bdefde4a654f4ac16e4f7 not found: ID does not exist" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.145676 4895 scope.go:117] "RemoveContainer" containerID="e5b03adefa3c5ee1f7eb9afe5c1d21736345c9ca03e9c7b6890311e1e86837f7" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.146634 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "be5d40f4-271c-4e6f-8ac2-6648d64ea21e" (UID: "be5d40f4-271c-4e6f-8ac2-6648d64ea21e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.155833 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.155856 4895 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.155880 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.156004 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22lrj\" (UniqueName: \"kubernetes.io/projected/be5d40f4-271c-4e6f-8ac2-6648d64ea21e-kube-api-access-22lrj\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:22 crc 
kubenswrapper[4895]: I0129 16:32:22.171532 4895 scope.go:117] "RemoveContainer" containerID="f56367ad6048ab35e65daf3cd990ca0a7306114f58959ae54ff9e6d7574f844f" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.371565 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.382698 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.403785 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:32:22 crc kubenswrapper[4895]: E0129 16:32:22.404309 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a980d03e-2583-427c-8133-5723e0eb7f69" containerName="dnsmasq-dns" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.404337 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a980d03e-2583-427c-8133-5723e0eb7f69" containerName="dnsmasq-dns" Jan 29 16:32:22 crc kubenswrapper[4895]: E0129 16:32:22.404371 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de7c5e0-275a-4e2e-9451-a653e428b29f" containerName="nova-manage" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.404380 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de7c5e0-275a-4e2e-9451-a653e428b29f" containerName="nova-manage" Jan 29 16:32:22 crc kubenswrapper[4895]: E0129 16:32:22.404395 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a980d03e-2583-427c-8133-5723e0eb7f69" containerName="init" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.404403 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a980d03e-2583-427c-8133-5723e0eb7f69" containerName="init" Jan 29 16:32:22 crc kubenswrapper[4895]: E0129 16:32:22.404419 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be5d40f4-271c-4e6f-8ac2-6648d64ea21e" containerName="nova-metadata-log" Jan 29 16:32:22 crc 
kubenswrapper[4895]: I0129 16:32:22.404427 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5d40f4-271c-4e6f-8ac2-6648d64ea21e" containerName="nova-metadata-log" Jan 29 16:32:22 crc kubenswrapper[4895]: E0129 16:32:22.404456 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be5d40f4-271c-4e6f-8ac2-6648d64ea21e" containerName="nova-metadata-metadata" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.404464 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5d40f4-271c-4e6f-8ac2-6648d64ea21e" containerName="nova-metadata-metadata" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.404665 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="be5d40f4-271c-4e6f-8ac2-6648d64ea21e" containerName="nova-metadata-metadata" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.404691 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a980d03e-2583-427c-8133-5723e0eb7f69" containerName="dnsmasq-dns" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.404708 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de7c5e0-275a-4e2e-9451-a653e428b29f" containerName="nova-manage" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.404720 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="be5d40f4-271c-4e6f-8ac2-6648d64ea21e" containerName="nova-metadata-log" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.405810 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.411243 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.412833 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.425566 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.535757 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.572314 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tqpl\" (UniqueName: \"kubernetes.io/projected/7b879ebd-b686-4535-aa46-94baaa9c0ae7-kube-api-access-6tqpl\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.572367 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-config-data\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.572406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.572433 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.572573 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b879ebd-b686-4535-aa46-94baaa9c0ae7-logs\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.673493 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-config-data\") pod \"53c09db6-a699-4b75-9503-08bfc7ad94c1\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.673639 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjj52\" (UniqueName: \"kubernetes.io/projected/53c09db6-a699-4b75-9503-08bfc7ad94c1-kube-api-access-kjj52\") pod \"53c09db6-a699-4b75-9503-08bfc7ad94c1\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.673739 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-combined-ca-bundle\") pod \"53c09db6-a699-4b75-9503-08bfc7ad94c1\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.673769 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-scripts\") pod 
\"53c09db6-a699-4b75-9503-08bfc7ad94c1\" (UID: \"53c09db6-a699-4b75-9503-08bfc7ad94c1\") " Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.674072 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b879ebd-b686-4535-aa46-94baaa9c0ae7-logs\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.674248 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tqpl\" (UniqueName: \"kubernetes.io/projected/7b879ebd-b686-4535-aa46-94baaa9c0ae7-kube-api-access-6tqpl\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.674282 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-config-data\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.674321 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.674350 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.678179 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b879ebd-b686-4535-aa46-94baaa9c0ae7-logs\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.685398 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c09db6-a699-4b75-9503-08bfc7ad94c1-kube-api-access-kjj52" (OuterVolumeSpecName: "kube-api-access-kjj52") pod "53c09db6-a699-4b75-9503-08bfc7ad94c1" (UID: "53c09db6-a699-4b75-9503-08bfc7ad94c1"). InnerVolumeSpecName "kube-api-access-kjj52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.686080 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-scripts" (OuterVolumeSpecName: "scripts") pod "53c09db6-a699-4b75-9503-08bfc7ad94c1" (UID: "53c09db6-a699-4b75-9503-08bfc7ad94c1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.688167 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.688270 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.690805 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-config-data\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.698442 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tqpl\" (UniqueName: \"kubernetes.io/projected/7b879ebd-b686-4535-aa46-94baaa9c0ae7-kube-api-access-6tqpl\") pod \"nova-metadata-0\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.724434 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53c09db6-a699-4b75-9503-08bfc7ad94c1" (UID: "53c09db6-a699-4b75-9503-08bfc7ad94c1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.745621 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-config-data" (OuterVolumeSpecName: "config-data") pod "53c09db6-a699-4b75-9503-08bfc7ad94c1" (UID: "53c09db6-a699-4b75-9503-08bfc7ad94c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.750679 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.777494 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.777541 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.777549 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c09db6-a699-4b75-9503-08bfc7ad94c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:22 crc kubenswrapper[4895]: I0129 16:32:22.777562 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjj52\" (UniqueName: \"kubernetes.io/projected/53c09db6-a699-4b75-9503-08bfc7ad94c1-kube-api-access-kjj52\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.052957 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bt4wg" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.053040 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5c6dd560-0c97-4573-8925-b59503518911" containerName="nova-scheduler-scheduler" containerID="cri-o://8b888e8751e7acb8893779da648b792b69c8f996fbe3f12052ec935cbd8cd535" gracePeriod=30 Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.057012 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a980d03e-2583-427c-8133-5723e0eb7f69" path="/var/lib/kubelet/pods/a980d03e-2583-427c-8133-5723e0eb7f69/volumes" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.058605 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be5d40f4-271c-4e6f-8ac2-6648d64ea21e" path="/var/lib/kubelet/pods/be5d40f4-271c-4e6f-8ac2-6648d64ea21e/volumes" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.059540 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bt4wg" event={"ID":"53c09db6-a699-4b75-9503-08bfc7ad94c1","Type":"ContainerDied","Data":"403fbd9f3a4409af7634bcd85cdb3753308ed69691af9e681f3d28f2e927fb9c"} Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.059577 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="403fbd9f3a4409af7634bcd85cdb3753308ed69691af9e681f3d28f2e927fb9c" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.146703 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:32:23 crc kubenswrapper[4895]: E0129 16:32:23.148702 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c09db6-a699-4b75-9503-08bfc7ad94c1" containerName="nova-cell1-conductor-db-sync" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.148728 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="53c09db6-a699-4b75-9503-08bfc7ad94c1" containerName="nova-cell1-conductor-db-sync" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.148923 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c09db6-a699-4b75-9503-08bfc7ad94c1" containerName="nova-cell1-conductor-db-sync" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.149557 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.152772 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.166570 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.260439 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:32:23 crc kubenswrapper[4895]: W0129 16:32:23.272461 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b879ebd_b686_4535_aa46_94baaa9c0ae7.slice/crio-13773b8c01dae1f9afb9637e7b9a7a163473a12f1b8988c1086ea2279e081e11 WatchSource:0}: Error finding container 13773b8c01dae1f9afb9637e7b9a7a163473a12f1b8988c1086ea2279e081e11: Status 404 returned error can't find the container with id 13773b8c01dae1f9afb9637e7b9a7a163473a12f1b8988c1086ea2279e081e11 Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.290751 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55543556-df47-452c-8436-353ddc374f3f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"55543556-df47-452c-8436-353ddc374f3f\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.290997 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55543556-df47-452c-8436-353ddc374f3f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"55543556-df47-452c-8436-353ddc374f3f\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.291250 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvblg\" (UniqueName: \"kubernetes.io/projected/55543556-df47-452c-8436-353ddc374f3f-kube-api-access-nvblg\") pod \"nova-cell1-conductor-0\" (UID: \"55543556-df47-452c-8436-353ddc374f3f\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.393023 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55543556-df47-452c-8436-353ddc374f3f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"55543556-df47-452c-8436-353ddc374f3f\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.393461 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvblg\" (UniqueName: \"kubernetes.io/projected/55543556-df47-452c-8436-353ddc374f3f-kube-api-access-nvblg\") pod \"nova-cell1-conductor-0\" (UID: \"55543556-df47-452c-8436-353ddc374f3f\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.393551 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55543556-df47-452c-8436-353ddc374f3f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"55543556-df47-452c-8436-353ddc374f3f\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.404710 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55543556-df47-452c-8436-353ddc374f3f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"55543556-df47-452c-8436-353ddc374f3f\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.404742 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55543556-df47-452c-8436-353ddc374f3f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"55543556-df47-452c-8436-353ddc374f3f\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.444169 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvblg\" (UniqueName: \"kubernetes.io/projected/55543556-df47-452c-8436-353ddc374f3f-kube-api-access-nvblg\") pod \"nova-cell1-conductor-0\" (UID: \"55543556-df47-452c-8436-353ddc374f3f\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:23 crc kubenswrapper[4895]: I0129 16:32:23.482185 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:24 crc kubenswrapper[4895]: I0129 16:32:24.020368 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:32:24 crc kubenswrapper[4895]: W0129 16:32:24.035984 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55543556_df47_452c_8436_353ddc374f3f.slice/crio-e91fbe1eead1259abbc2d7d5ecb8ceab7b4ceee255fec2640f3e4902dd65b6a3 WatchSource:0}: Error finding container e91fbe1eead1259abbc2d7d5ecb8ceab7b4ceee255fec2640f3e4902dd65b6a3: Status 404 returned error can't find the container with id e91fbe1eead1259abbc2d7d5ecb8ceab7b4ceee255fec2640f3e4902dd65b6a3 Jan 29 16:32:24 crc kubenswrapper[4895]: I0129 16:32:24.067328 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b879ebd-b686-4535-aa46-94baaa9c0ae7","Type":"ContainerStarted","Data":"4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721"} Jan 29 16:32:24 crc kubenswrapper[4895]: I0129 16:32:24.067390 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b879ebd-b686-4535-aa46-94baaa9c0ae7","Type":"ContainerStarted","Data":"d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c"} Jan 29 16:32:24 crc kubenswrapper[4895]: I0129 16:32:24.067401 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b879ebd-b686-4535-aa46-94baaa9c0ae7","Type":"ContainerStarted","Data":"13773b8c01dae1f9afb9637e7b9a7a163473a12f1b8988c1086ea2279e081e11"} Jan 29 16:32:24 crc kubenswrapper[4895]: I0129 16:32:24.074923 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"55543556-df47-452c-8436-353ddc374f3f","Type":"ContainerStarted","Data":"e91fbe1eead1259abbc2d7d5ecb8ceab7b4ceee255fec2640f3e4902dd65b6a3"} Jan 29 16:32:24 crc 
kubenswrapper[4895]: I0129 16:32:24.096467 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.096443577 podStartE2EDuration="2.096443577s" podCreationTimestamp="2026-01-29 16:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:24.089939901 +0000 UTC m=+1227.892917165" watchObservedRunningTime="2026-01-29 16:32:24.096443577 +0000 UTC m=+1227.899420851" Jan 29 16:32:25 crc kubenswrapper[4895]: I0129 16:32:25.088021 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"55543556-df47-452c-8436-353ddc374f3f","Type":"ContainerStarted","Data":"e086a8a5a3e61e56e469385c84468ead3ddbc7452dcb110ada8e230d3e671051"} Jan 29 16:32:25 crc kubenswrapper[4895]: I0129 16:32:25.088463 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:25 crc kubenswrapper[4895]: I0129 16:32:25.112771 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.112752266 podStartE2EDuration="2.112752266s" podCreationTimestamp="2026-01-29 16:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:25.105607992 +0000 UTC m=+1228.908585266" watchObservedRunningTime="2026-01-29 16:32:25.112752266 +0000 UTC m=+1228.915729530" Jan 29 16:32:25 crc kubenswrapper[4895]: E0129 16:32:25.137120 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8b888e8751e7acb8893779da648b792b69c8f996fbe3f12052ec935cbd8cd535" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:32:25 crc 
kubenswrapper[4895]: E0129 16:32:25.139338 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8b888e8751e7acb8893779da648b792b69c8f996fbe3f12052ec935cbd8cd535" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:32:25 crc kubenswrapper[4895]: E0129 16:32:25.141382 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8b888e8751e7acb8893779da648b792b69c8f996fbe3f12052ec935cbd8cd535" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:32:25 crc kubenswrapper[4895]: E0129 16:32:25.141439 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="5c6dd560-0c97-4573-8925-b59503518911" containerName="nova-scheduler-scheduler" Jan 29 16:32:26 crc kubenswrapper[4895]: I0129 16:32:26.100885 4895 generic.go:334] "Generic (PLEG): container finished" podID="5c6dd560-0c97-4573-8925-b59503518911" containerID="8b888e8751e7acb8893779da648b792b69c8f996fbe3f12052ec935cbd8cd535" exitCode=0 Jan 29 16:32:26 crc kubenswrapper[4895]: I0129 16:32:26.102338 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5c6dd560-0c97-4573-8925-b59503518911","Type":"ContainerDied","Data":"8b888e8751e7acb8893779da648b792b69c8f996fbe3f12052ec935cbd8cd535"} Jan 29 16:32:26 crc kubenswrapper[4895]: I0129 16:32:26.280997 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:32:26 crc kubenswrapper[4895]: I0129 16:32:26.363191 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd560-0c97-4573-8925-b59503518911-config-data\") pod \"5c6dd560-0c97-4573-8925-b59503518911\" (UID: \"5c6dd560-0c97-4573-8925-b59503518911\") " Jan 29 16:32:26 crc kubenswrapper[4895]: I0129 16:32:26.363342 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6sjj\" (UniqueName: \"kubernetes.io/projected/5c6dd560-0c97-4573-8925-b59503518911-kube-api-access-s6sjj\") pod \"5c6dd560-0c97-4573-8925-b59503518911\" (UID: \"5c6dd560-0c97-4573-8925-b59503518911\") " Jan 29 16:32:26 crc kubenswrapper[4895]: I0129 16:32:26.363384 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd560-0c97-4573-8925-b59503518911-combined-ca-bundle\") pod \"5c6dd560-0c97-4573-8925-b59503518911\" (UID: \"5c6dd560-0c97-4573-8925-b59503518911\") " Jan 29 16:32:26 crc kubenswrapper[4895]: I0129 16:32:26.387296 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c6dd560-0c97-4573-8925-b59503518911-kube-api-access-s6sjj" (OuterVolumeSpecName: "kube-api-access-s6sjj") pod "5c6dd560-0c97-4573-8925-b59503518911" (UID: "5c6dd560-0c97-4573-8925-b59503518911"). InnerVolumeSpecName "kube-api-access-s6sjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:32:26 crc kubenswrapper[4895]: I0129 16:32:26.401094 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6dd560-0c97-4573-8925-b59503518911-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c6dd560-0c97-4573-8925-b59503518911" (UID: "5c6dd560-0c97-4573-8925-b59503518911"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:26 crc kubenswrapper[4895]: I0129 16:32:26.405302 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6dd560-0c97-4573-8925-b59503518911-config-data" (OuterVolumeSpecName: "config-data") pod "5c6dd560-0c97-4573-8925-b59503518911" (UID: "5c6dd560-0c97-4573-8925-b59503518911"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:26 crc kubenswrapper[4895]: I0129 16:32:26.465359 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6dd560-0c97-4573-8925-b59503518911-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:26 crc kubenswrapper[4895]: I0129 16:32:26.465396 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6sjj\" (UniqueName: \"kubernetes.io/projected/5c6dd560-0c97-4573-8925-b59503518911-kube-api-access-s6sjj\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:26 crc kubenswrapper[4895]: I0129 16:32:26.465409 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6dd560-0c97-4573-8925-b59503518911-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.127615 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5c6dd560-0c97-4573-8925-b59503518911","Type":"ContainerDied","Data":"5817b9e55ed966ac8fa25516573393a14b1bdeab6e04e2ac269bf9e254eddb75"} Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.127682 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.127735 4895 scope.go:117] "RemoveContainer" containerID="8b888e8751e7acb8893779da648b792b69c8f996fbe3f12052ec935cbd8cd535" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.160334 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.169293 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.182640 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:32:27 crc kubenswrapper[4895]: E0129 16:32:27.183771 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6dd560-0c97-4573-8925-b59503518911" containerName="nova-scheduler-scheduler" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.183794 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6dd560-0c97-4573-8925-b59503518911" containerName="nova-scheduler-scheduler" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.184230 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6dd560-0c97-4573-8925-b59503518911" containerName="nova-scheduler-scheduler" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.185722 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.188779 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.205129 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.281771 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.281852 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gvhz\" (UniqueName: \"kubernetes.io/projected/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-kube-api-access-5gvhz\") pod \"nova-scheduler-0\" (UID: \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.281943 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-config-data\") pod \"nova-scheduler-0\" (UID: \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.383381 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gvhz\" (UniqueName: \"kubernetes.io/projected/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-kube-api-access-5gvhz\") pod \"nova-scheduler-0\" (UID: \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.384128 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-config-data\") pod \"nova-scheduler-0\" (UID: \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.384362 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.392522 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-config-data\") pod \"nova-scheduler-0\" (UID: \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.394641 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.406890 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gvhz\" (UniqueName: \"kubernetes.io/projected/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-kube-api-access-5gvhz\") pod \"nova-scheduler-0\" (UID: \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\") " pod="openstack/nova-scheduler-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.518100 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.767846 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.768222 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.823340 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:32:27 crc kubenswrapper[4895]: I0129 16:32:27.823415 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.042931 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:32:28 crc kubenswrapper[4895]: W0129 16:32:28.051002 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d3b4c1a_b5fa_4816_9653_2e7841f39dce.slice/crio-b4aae2a56819fc343de501859958dfd422e8764efec03f01f93d41070f1c17e1 WatchSource:0}: Error finding container b4aae2a56819fc343de501859958dfd422e8764efec03f01f93d41070f1c17e1: Status 404 returned error can't find the container with id b4aae2a56819fc343de501859958dfd422e8764efec03f01f93d41070f1c17e1 Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.120605 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.144558 4895 generic.go:334] "Generic (PLEG): container finished" podID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" containerID="24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d" exitCode=0 Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.144966 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.144992 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147","Type":"ContainerDied","Data":"24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d"} Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.145727 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147","Type":"ContainerDied","Data":"e9c8d6626cae270fecfd5b04314d023c0d999333699daee570850791d07f3fea"} Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.145771 4895 scope.go:117] "RemoveContainer" containerID="24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.152134 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7d3b4c1a-b5fa-4816-9653-2e7841f39dce","Type":"ContainerStarted","Data":"b4aae2a56819fc343de501859958dfd422e8764efec03f01f93d41070f1c17e1"} Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.192339 4895 scope.go:117] "RemoveContainer" containerID="7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.199453 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-combined-ca-bundle\") pod 
\"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.199599 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-logs\") pod \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.199652 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-config-data\") pod \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.199805 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdmkv\" (UniqueName: \"kubernetes.io/projected/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-kube-api-access-pdmkv\") pod \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\" (UID: \"1266eaf4-ca85-4ce4-bbbf-b288f2bbb147\") " Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.200585 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-logs" (OuterVolumeSpecName: "logs") pod "1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" (UID: "1266eaf4-ca85-4ce4-bbbf-b288f2bbb147"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.205417 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-kube-api-access-pdmkv" (OuterVolumeSpecName: "kube-api-access-pdmkv") pod "1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" (UID: "1266eaf4-ca85-4ce4-bbbf-b288f2bbb147"). InnerVolumeSpecName "kube-api-access-pdmkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.215839 4895 scope.go:117] "RemoveContainer" containerID="24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d" Jan 29 16:32:28 crc kubenswrapper[4895]: E0129 16:32:28.216283 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d\": container with ID starting with 24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d not found: ID does not exist" containerID="24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.216333 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d"} err="failed to get container status \"24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d\": rpc error: code = NotFound desc = could not find container \"24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d\": container with ID starting with 24edc236eeae29d7061806d3a1ccbd5ce245e20eda7e9116b29f260df9f8902d not found: ID does not exist" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.216362 4895 scope.go:117] "RemoveContainer" containerID="7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8" Jan 29 16:32:28 crc kubenswrapper[4895]: E0129 16:32:28.217053 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8\": container with ID starting with 7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8 not found: ID does not exist" containerID="7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.217216 
4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8"} err="failed to get container status \"7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8\": rpc error: code = NotFound desc = could not find container \"7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8\": container with ID starting with 7b69e5fe2b28a65f48f84536c5560b4dc81739d53f05c28e74665da6db3791a8 not found: ID does not exist" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.243654 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" (UID: "1266eaf4-ca85-4ce4-bbbf-b288f2bbb147"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.254990 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-config-data" (OuterVolumeSpecName: "config-data") pod "1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" (UID: "1266eaf4-ca85-4ce4-bbbf-b288f2bbb147"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.302123 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.302162 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.302173 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.302183 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdmkv\" (UniqueName: \"kubernetes.io/projected/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147-kube-api-access-pdmkv\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.481374 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.491575 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.502957 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:28 crc kubenswrapper[4895]: E0129 16:32:28.503470 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" containerName="nova-api-log" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.503493 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" containerName="nova-api-log" Jan 29 16:32:28 crc kubenswrapper[4895]: E0129 
16:32:28.503507 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" containerName="nova-api-api" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.503514 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" containerName="nova-api-api" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.503749 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" containerName="nova-api-api" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.503780 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" containerName="nova-api-log" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.505181 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.508118 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.529197 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.611926 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q6wm\" (UniqueName: \"kubernetes.io/projected/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-kube-api-access-9q6wm\") pod \"nova-api-0\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.612453 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-logs\") pod \"nova-api-0\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: 
I0129 16:32:28.612653 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-config-data\") pod \"nova-api-0\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.612820 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.714527 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.714660 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q6wm\" (UniqueName: \"kubernetes.io/projected/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-kube-api-access-9q6wm\") pod \"nova-api-0\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.715239 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-logs\") pod \"nova-api-0\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.719916 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.722354 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-logs\") pod \"nova-api-0\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.722674 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-config-data\") pod \"nova-api-0\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.729252 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-config-data\") pod \"nova-api-0\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.742830 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q6wm\" (UniqueName: \"kubernetes.io/projected/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-kube-api-access-9q6wm\") pod \"nova-api-0\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " pod="openstack/nova-api-0" Jan 29 16:32:28 crc kubenswrapper[4895]: I0129 16:32:28.837367 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:32:29 crc kubenswrapper[4895]: I0129 16:32:29.050818 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1266eaf4-ca85-4ce4-bbbf-b288f2bbb147" path="/var/lib/kubelet/pods/1266eaf4-ca85-4ce4-bbbf-b288f2bbb147/volumes" Jan 29 16:32:29 crc kubenswrapper[4895]: I0129 16:32:29.052125 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c6dd560-0c97-4573-8925-b59503518911" path="/var/lib/kubelet/pods/5c6dd560-0c97-4573-8925-b59503518911/volumes" Jan 29 16:32:29 crc kubenswrapper[4895]: I0129 16:32:29.178173 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7d3b4c1a-b5fa-4816-9653-2e7841f39dce","Type":"ContainerStarted","Data":"f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027"} Jan 29 16:32:29 crc kubenswrapper[4895]: I0129 16:32:29.216676 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.216650541 podStartE2EDuration="2.216650541s" podCreationTimestamp="2026-01-29 16:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:29.202576572 +0000 UTC m=+1233.005553846" watchObservedRunningTime="2026-01-29 16:32:29.216650541 +0000 UTC m=+1233.019627835" Jan 29 16:32:29 crc kubenswrapper[4895]: I0129 16:32:29.377857 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:29 crc kubenswrapper[4895]: W0129 16:32:29.378887 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5f276e4_43c0_4ac6_a057_1e36cbf150d5.slice/crio-f47044ec652b5f8aac5fdba69a5eb76c84f5c648a527b169c3afcce62585fe8a WatchSource:0}: Error finding container f47044ec652b5f8aac5fdba69a5eb76c84f5c648a527b169c3afcce62585fe8a: Status 404 returned 
error can't find the container with id f47044ec652b5f8aac5fdba69a5eb76c84f5c648a527b169c3afcce62585fe8a Jan 29 16:32:30 crc kubenswrapper[4895]: I0129 16:32:30.206733 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5f276e4-43c0-4ac6-a057-1e36cbf150d5","Type":"ContainerStarted","Data":"70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618"} Jan 29 16:32:30 crc kubenswrapper[4895]: I0129 16:32:30.207461 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5f276e4-43c0-4ac6-a057-1e36cbf150d5","Type":"ContainerStarted","Data":"4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440"} Jan 29 16:32:30 crc kubenswrapper[4895]: I0129 16:32:30.207504 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5f276e4-43c0-4ac6-a057-1e36cbf150d5","Type":"ContainerStarted","Data":"f47044ec652b5f8aac5fdba69a5eb76c84f5c648a527b169c3afcce62585fe8a"} Jan 29 16:32:30 crc kubenswrapper[4895]: I0129 16:32:30.239691 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.239667161 podStartE2EDuration="2.239667161s" podCreationTimestamp="2026-01-29 16:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:30.232085152 +0000 UTC m=+1234.035062426" watchObservedRunningTime="2026-01-29 16:32:30.239667161 +0000 UTC m=+1234.042644435" Jan 29 16:32:32 crc kubenswrapper[4895]: I0129 16:32:32.518364 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 16:32:32 crc kubenswrapper[4895]: I0129 16:32:32.752073 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:32:32 crc kubenswrapper[4895]: I0129 16:32:32.752157 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:32:33 crc kubenswrapper[4895]: E0129 16:32:33.040401 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" Jan 29 16:32:33 crc kubenswrapper[4895]: I0129 16:32:33.519371 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 16:32:33 crc kubenswrapper[4895]: I0129 16:32:33.767109 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:32:33 crc kubenswrapper[4895]: I0129 16:32:33.767197 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:32:37 crc kubenswrapper[4895]: I0129 16:32:37.518700 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 16:32:37 crc kubenswrapper[4895]: I0129 16:32:37.554801 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 16:32:38 crc kubenswrapper[4895]: I0129 16:32:38.369735 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 16:32:38 crc kubenswrapper[4895]: I0129 16:32:38.839554 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Jan 29 16:32:38 crc kubenswrapper[4895]: I0129 16:32:38.839616 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:32:39 crc kubenswrapper[4895]: I0129 16:32:39.921545 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:32:39 crc kubenswrapper[4895]: I0129 16:32:39.921955 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:32:42 crc kubenswrapper[4895]: I0129 16:32:42.804165 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:32:42 crc kubenswrapper[4895]: I0129 16:32:42.807419 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:32:42 crc kubenswrapper[4895]: I0129 16:32:42.813718 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:32:43 crc kubenswrapper[4895]: I0129 16:32:43.398807 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:32:45 crc kubenswrapper[4895]: I0129 16:32:45.417794 4895 generic.go:334] "Generic (PLEG): container finished" podID="ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f" containerID="8f29441a4ef791b21d3d103959ba22060b3a44669cac081346a33981d89f7560" exitCode=137 Jan 29 16:32:45 crc kubenswrapper[4895]: I0129 16:32:45.418099 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f","Type":"ContainerDied","Data":"8f29441a4ef791b21d3d103959ba22060b3a44669cac081346a33981d89f7560"} Jan 29 16:32:45 crc kubenswrapper[4895]: I0129 16:32:45.621765 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:45 crc kubenswrapper[4895]: I0129 16:32:45.822322 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dw7d\" (UniqueName: \"kubernetes.io/projected/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-kube-api-access-6dw7d\") pod \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\" (UID: \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\") " Jan 29 16:32:45 crc kubenswrapper[4895]: I0129 16:32:45.822929 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-combined-ca-bundle\") pod \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\" (UID: \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\") " Jan 29 16:32:45 crc kubenswrapper[4895]: I0129 16:32:45.823038 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-config-data\") pod \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\" (UID: \"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f\") " Jan 29 16:32:45 crc kubenswrapper[4895]: I0129 16:32:45.829712 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-kube-api-access-6dw7d" (OuterVolumeSpecName: "kube-api-access-6dw7d") pod "ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f" (UID: "ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f"). InnerVolumeSpecName "kube-api-access-6dw7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:32:45 crc kubenswrapper[4895]: I0129 16:32:45.864336 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-config-data" (OuterVolumeSpecName: "config-data") pod "ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f" (UID: "ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:45 crc kubenswrapper[4895]: I0129 16:32:45.870643 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f" (UID: "ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:45 crc kubenswrapper[4895]: I0129 16:32:45.927533 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dw7d\" (UniqueName: \"kubernetes.io/projected/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-kube-api-access-6dw7d\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:45 crc kubenswrapper[4895]: I0129 16:32:45.927580 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:45 crc kubenswrapper[4895]: I0129 16:32:45.927593 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.434656 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f","Type":"ContainerDied","Data":"6030e49c38c322def988fab9ba8df2edb28721a7fcf7e9075631aeb1141fb570"} Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.434726 4895 scope.go:117] "RemoveContainer" containerID="8f29441a4ef791b21d3d103959ba22060b3a44669cac081346a33981d89f7560" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.434790 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.512761 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.528837 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.551822 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:32:46 crc kubenswrapper[4895]: E0129 16:32:46.552425 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.552450 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.552638 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.553549 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.556907 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.558743 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.559009 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.575826 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.644934 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.645145 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.645216 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nlkz\" (UniqueName: \"kubernetes.io/projected/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-kube-api-access-8nlkz\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc 
kubenswrapper[4895]: I0129 16:32:46.645278 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.645302 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.748626 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.748733 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.748849 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 
16:32:46.749025 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.749078 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nlkz\" (UniqueName: \"kubernetes.io/projected/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-kube-api-access-8nlkz\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.755479 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.758529 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.762453 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.762505 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.776002 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nlkz\" (UniqueName: \"kubernetes.io/projected/bb7cd62a-8d8a-4f2e-a88d-2a028960477f-kube-api-access-8nlkz\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb7cd62a-8d8a-4f2e-a88d-2a028960477f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:46 crc kubenswrapper[4895]: I0129 16:32:46.891741 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:47 crc kubenswrapper[4895]: I0129 16:32:47.054494 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f" path="/var/lib/kubelet/pods/ab5ebdd2-f24f-4a52-b72c-9b0a2838e16f/volumes" Jan 29 16:32:47 crc kubenswrapper[4895]: W0129 16:32:47.411940 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb7cd62a_8d8a_4f2e_a88d_2a028960477f.slice/crio-5fc4dfdc75696b5180076d846e12591b6e05b3e9d9488d9f85b7a61cf9249f08 WatchSource:0}: Error finding container 5fc4dfdc75696b5180076d846e12591b6e05b3e9d9488d9f85b7a61cf9249f08: Status 404 returned error can't find the container with id 5fc4dfdc75696b5180076d846e12591b6e05b3e9d9488d9f85b7a61cf9249f08 Jan 29 16:32:47 crc kubenswrapper[4895]: I0129 16:32:47.414651 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:32:47 crc kubenswrapper[4895]: I0129 16:32:47.449232 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"bb7cd62a-8d8a-4f2e-a88d-2a028960477f","Type":"ContainerStarted","Data":"5fc4dfdc75696b5180076d846e12591b6e05b3e9d9488d9f85b7a61cf9249f08"} Jan 29 16:32:48 crc kubenswrapper[4895]: I0129 16:32:48.466672 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bb7cd62a-8d8a-4f2e-a88d-2a028960477f","Type":"ContainerStarted","Data":"aafd0d69c298a4085925d02fab2067320c2c9d2a10637a65c4db82292b8d9347"} Jan 29 16:32:48 crc kubenswrapper[4895]: I0129 16:32:48.470944 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09c98c7-08b6-4e32-b310-d545896b1d5a","Type":"ContainerStarted","Data":"bdc54448c7c82ac9058253115fad16b0d60831418fab4ecde638024be9d7876c"} Jan 29 16:32:48 crc kubenswrapper[4895]: I0129 16:32:48.471214 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 16:32:48 crc kubenswrapper[4895]: I0129 16:32:48.506839 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.506815204 podStartE2EDuration="2.506815204s" podCreationTimestamp="2026-01-29 16:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:48.499593304 +0000 UTC m=+1252.302570598" watchObservedRunningTime="2026-01-29 16:32:48.506815204 +0000 UTC m=+1252.309792478" Jan 29 16:32:48 crc kubenswrapper[4895]: I0129 16:32:48.529228 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8898705919999998 podStartE2EDuration="48.529201661s" podCreationTimestamp="2026-01-29 16:32:00 +0000 UTC" firstStartedPulling="2026-01-29 16:32:01.49159358 +0000 UTC m=+1205.294570854" lastFinishedPulling="2026-01-29 16:32:48.130924649 +0000 UTC m=+1251.933901923" observedRunningTime="2026-01-29 16:32:48.521417097 +0000 UTC 
m=+1252.324394441" watchObservedRunningTime="2026-01-29 16:32:48.529201661 +0000 UTC m=+1252.332178955" Jan 29 16:32:48 crc kubenswrapper[4895]: I0129 16:32:48.844347 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:32:48 crc kubenswrapper[4895]: I0129 16:32:48.844462 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:32:48 crc kubenswrapper[4895]: I0129 16:32:48.846538 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:32:48 crc kubenswrapper[4895]: I0129 16:32:48.846599 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:32:48 crc kubenswrapper[4895]: I0129 16:32:48.859376 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:32:48 crc kubenswrapper[4895]: I0129 16:32:48.859491 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.109168 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-4tnv5"] Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.117063 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.155368 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-4tnv5"] Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.222595 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bppck\" (UniqueName: \"kubernetes.io/projected/2434c3d1-86c7-4c7b-b431-c799de0dadd2-kube-api-access-bppck\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.223080 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-config\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.223166 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.223379 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.223471 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.325157 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bppck\" (UniqueName: \"kubernetes.io/projected/2434c3d1-86c7-4c7b-b431-c799de0dadd2-kube-api-access-bppck\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.325953 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-config\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.327707 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.327655 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-config\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.328346 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.328714 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.329385 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.329315 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.329933 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.354112 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bppck\" (UniqueName: \"kubernetes.io/projected/2434c3d1-86c7-4c7b-b431-c799de0dadd2-kube-api-access-bppck\") pod 
\"dnsmasq-dns-68d4b6d797-4tnv5\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.456652 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:49 crc kubenswrapper[4895]: I0129 16:32:49.969389 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-4tnv5"] Jan 29 16:32:49 crc kubenswrapper[4895]: W0129 16:32:49.969856 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2434c3d1_86c7_4c7b_b431_c799de0dadd2.slice/crio-b9f2612a27de4c2ce7116a34767744d97b6c50fc2475a074b745d1ab4c2246ec WatchSource:0}: Error finding container b9f2612a27de4c2ce7116a34767744d97b6c50fc2475a074b745d1ab4c2246ec: Status 404 returned error can't find the container with id b9f2612a27de4c2ce7116a34767744d97b6c50fc2475a074b745d1ab4c2246ec Jan 29 16:32:50 crc kubenswrapper[4895]: I0129 16:32:50.497591 4895 generic.go:334] "Generic (PLEG): container finished" podID="2434c3d1-86c7-4c7b-b431-c799de0dadd2" containerID="fec41637012fd5bdb150fbbeab8acee8ed14a667c56a2c9868d4d58489f19bb8" exitCode=0 Jan 29 16:32:50 crc kubenswrapper[4895]: I0129 16:32:50.497716 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" event={"ID":"2434c3d1-86c7-4c7b-b431-c799de0dadd2","Type":"ContainerDied","Data":"fec41637012fd5bdb150fbbeab8acee8ed14a667c56a2c9868d4d58489f19bb8"} Jan 29 16:32:50 crc kubenswrapper[4895]: I0129 16:32:50.498113 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" event={"ID":"2434c3d1-86c7-4c7b-b431-c799de0dadd2","Type":"ContainerStarted","Data":"b9f2612a27de4c2ce7116a34767744d97b6c50fc2475a074b745d1ab4c2246ec"} Jan 29 16:32:51 crc kubenswrapper[4895]: I0129 16:32:51.448407 4895 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:32:51 crc kubenswrapper[4895]: I0129 16:32:51.450045 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="ceilometer-central-agent" containerID="cri-o://eb564d4bae5d4196379241f0277b03daaa388edf74e16ed1162d33fee7f15749" gracePeriod=30 Jan 29 16:32:51 crc kubenswrapper[4895]: I0129 16:32:51.450132 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="sg-core" containerID="cri-o://0b3d01a70d674d61ead5c03dd02af8500213453773a49719f4b87cef0ec20d28" gracePeriod=30 Jan 29 16:32:51 crc kubenswrapper[4895]: I0129 16:32:51.450153 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="ceilometer-notification-agent" containerID="cri-o://6e15d5b3b141fabcc335de298706f346763e5578a1ec83bf9b4049e4d62369b2" gracePeriod=30 Jan 29 16:32:51 crc kubenswrapper[4895]: I0129 16:32:51.450223 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="proxy-httpd" containerID="cri-o://bdc54448c7c82ac9058253115fad16b0d60831418fab4ecde638024be9d7876c" gracePeriod=30 Jan 29 16:32:51 crc kubenswrapper[4895]: I0129 16:32:51.529099 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" event={"ID":"2434c3d1-86c7-4c7b-b431-c799de0dadd2","Type":"ContainerStarted","Data":"878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1"} Jan 29 16:32:51 crc kubenswrapper[4895]: I0129 16:32:51.529835 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:51 crc kubenswrapper[4895]: I0129 
16:32:51.551962 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" podStartSLOduration=2.551941349 podStartE2EDuration="2.551941349s" podCreationTimestamp="2026-01-29 16:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:51.551518717 +0000 UTC m=+1255.354495981" watchObservedRunningTime="2026-01-29 16:32:51.551941349 +0000 UTC m=+1255.354918633" Jan 29 16:32:51 crc kubenswrapper[4895]: I0129 16:32:51.831485 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:51 crc kubenswrapper[4895]: I0129 16:32:51.831777 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" containerName="nova-api-log" containerID="cri-o://4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440" gracePeriod=30 Jan 29 16:32:51 crc kubenswrapper[4895]: I0129 16:32:51.831977 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" containerName="nova-api-api" containerID="cri-o://70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618" gracePeriod=30 Jan 29 16:32:51 crc kubenswrapper[4895]: I0129 16:32:51.892721 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:52 crc kubenswrapper[4895]: I0129 16:32:52.546236 4895 generic.go:334] "Generic (PLEG): container finished" podID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerID="bdc54448c7c82ac9058253115fad16b0d60831418fab4ecde638024be9d7876c" exitCode=0 Jan 29 16:32:52 crc kubenswrapper[4895]: I0129 16:32:52.546714 4895 generic.go:334] "Generic (PLEG): container finished" podID="e09c98c7-08b6-4e32-b310-d545896b1d5a" 
containerID="0b3d01a70d674d61ead5c03dd02af8500213453773a49719f4b87cef0ec20d28" exitCode=2 Jan 29 16:32:52 crc kubenswrapper[4895]: I0129 16:32:52.546729 4895 generic.go:334] "Generic (PLEG): container finished" podID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerID="eb564d4bae5d4196379241f0277b03daaa388edf74e16ed1162d33fee7f15749" exitCode=0 Jan 29 16:32:52 crc kubenswrapper[4895]: I0129 16:32:52.546793 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09c98c7-08b6-4e32-b310-d545896b1d5a","Type":"ContainerDied","Data":"bdc54448c7c82ac9058253115fad16b0d60831418fab4ecde638024be9d7876c"} Jan 29 16:32:52 crc kubenswrapper[4895]: I0129 16:32:52.546829 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09c98c7-08b6-4e32-b310-d545896b1d5a","Type":"ContainerDied","Data":"0b3d01a70d674d61ead5c03dd02af8500213453773a49719f4b87cef0ec20d28"} Jan 29 16:32:52 crc kubenswrapper[4895]: I0129 16:32:52.546840 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09c98c7-08b6-4e32-b310-d545896b1d5a","Type":"ContainerDied","Data":"eb564d4bae5d4196379241f0277b03daaa388edf74e16ed1162d33fee7f15749"} Jan 29 16:32:52 crc kubenswrapper[4895]: I0129 16:32:52.549904 4895 generic.go:334] "Generic (PLEG): container finished" podID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" containerID="4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440" exitCode=143 Jan 29 16:32:52 crc kubenswrapper[4895]: I0129 16:32:52.551080 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5f276e4-43c0-4ac6-a057-1e36cbf150d5","Type":"ContainerDied","Data":"4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440"} Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.500555 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.570472 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-combined-ca-bundle\") pod \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.570570 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-logs\") pod \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.570654 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q6wm\" (UniqueName: \"kubernetes.io/projected/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-kube-api-access-9q6wm\") pod \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.570736 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-config-data\") pod \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\" (UID: \"e5f276e4-43c0-4ac6-a057-1e36cbf150d5\") " Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.572076 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-logs" (OuterVolumeSpecName: "logs") pod "e5f276e4-43c0-4ac6-a057-1e36cbf150d5" (UID: "e5f276e4-43c0-4ac6-a057-1e36cbf150d5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.589212 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-kube-api-access-9q6wm" (OuterVolumeSpecName: "kube-api-access-9q6wm") pod "e5f276e4-43c0-4ac6-a057-1e36cbf150d5" (UID: "e5f276e4-43c0-4ac6-a057-1e36cbf150d5"). InnerVolumeSpecName "kube-api-access-9q6wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.617183 4895 generic.go:334] "Generic (PLEG): container finished" podID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerID="6e15d5b3b141fabcc335de298706f346763e5578a1ec83bf9b4049e4d62369b2" exitCode=0 Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.617337 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09c98c7-08b6-4e32-b310-d545896b1d5a","Type":"ContainerDied","Data":"6e15d5b3b141fabcc335de298706f346763e5578a1ec83bf9b4049e4d62369b2"} Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.621179 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5f276e4-43c0-4ac6-a057-1e36cbf150d5" (UID: "e5f276e4-43c0-4ac6-a057-1e36cbf150d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.621103 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-config-data" (OuterVolumeSpecName: "config-data") pod "e5f276e4-43c0-4ac6-a057-1e36cbf150d5" (UID: "e5f276e4-43c0-4ac6-a057-1e36cbf150d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.623492 4895 generic.go:334] "Generic (PLEG): container finished" podID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" containerID="70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618" exitCode=0 Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.623541 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.623551 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5f276e4-43c0-4ac6-a057-1e36cbf150d5","Type":"ContainerDied","Data":"70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618"} Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.623589 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5f276e4-43c0-4ac6-a057-1e36cbf150d5","Type":"ContainerDied","Data":"f47044ec652b5f8aac5fdba69a5eb76c84f5c648a527b169c3afcce62585fe8a"} Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.623611 4895 scope.go:117] "RemoveContainer" containerID="70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.674486 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q6wm\" (UniqueName: \"kubernetes.io/projected/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-kube-api-access-9q6wm\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.674550 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.674569 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.674583 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f276e4-43c0-4ac6-a057-1e36cbf150d5-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.708146 4895 scope.go:117] "RemoveContainer" containerID="4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.713788 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.738906 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.750173 4895 scope.go:117] "RemoveContainer" containerID="70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618" Jan 29 16:32:55 crc kubenswrapper[4895]: E0129 16:32:55.751435 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618\": container with ID starting with 70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618 not found: ID does not exist" containerID="70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.751467 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618"} err="failed to get container status \"70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618\": rpc error: code = NotFound desc = could not find container \"70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618\": container with ID starting with 
70289819dcff0eeaf938972b7e1312289b08d3651e906c8640d49027f5821618 not found: ID does not exist" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.751493 4895 scope.go:117] "RemoveContainer" containerID="4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440" Jan 29 16:32:55 crc kubenswrapper[4895]: E0129 16:32:55.754198 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440\": container with ID starting with 4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440 not found: ID does not exist" containerID="4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.754578 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440"} err="failed to get container status \"4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440\": rpc error: code = NotFound desc = could not find container \"4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440\": container with ID starting with 4fc455fb8ca4a6c49fea29cbc9b898c6104a655b756dda0a7a2307ca94020440 not found: ID does not exist" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.758003 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:55 crc kubenswrapper[4895]: E0129 16:32:55.758833 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" containerName="nova-api-log" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.758856 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" containerName="nova-api-log" Jan 29 16:32:55 crc kubenswrapper[4895]: E0129 16:32:55.758918 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" containerName="nova-api-api" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.758925 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" containerName="nova-api-api" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.759179 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" containerName="nova-api-log" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.759194 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" containerName="nova-api-api" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.760375 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.763412 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.763640 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.763769 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.777158 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9bmd\" (UniqueName: \"kubernetes.io/projected/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-kube-api-access-j9bmd\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.777221 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-logs\") pod \"nova-api-0\" (UID: 
\"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.777318 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.777379 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-config-data\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.777418 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-internal-tls-certs\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.777434 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-public-tls-certs\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.795786 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.801481 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.879397 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09c98c7-08b6-4e32-b310-d545896b1d5a-log-httpd\") pod \"e09c98c7-08b6-4e32-b310-d545896b1d5a\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.879500 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-config-data\") pod \"e09c98c7-08b6-4e32-b310-d545896b1d5a\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.879673 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-sg-core-conf-yaml\") pod \"e09c98c7-08b6-4e32-b310-d545896b1d5a\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.879817 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-combined-ca-bundle\") pod \"e09c98c7-08b6-4e32-b310-d545896b1d5a\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.879856 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-scripts\") pod \"e09c98c7-08b6-4e32-b310-d545896b1d5a\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.879944 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccgnk\" (UniqueName: 
\"kubernetes.io/projected/e09c98c7-08b6-4e32-b310-d545896b1d5a-kube-api-access-ccgnk\") pod \"e09c98c7-08b6-4e32-b310-d545896b1d5a\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.879982 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09c98c7-08b6-4e32-b310-d545896b1d5a-run-httpd\") pod \"e09c98c7-08b6-4e32-b310-d545896b1d5a\" (UID: \"e09c98c7-08b6-4e32-b310-d545896b1d5a\") " Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.880363 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-config-data\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.880363 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e09c98c7-08b6-4e32-b310-d545896b1d5a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e09c98c7-08b6-4e32-b310-d545896b1d5a" (UID: "e09c98c7-08b6-4e32-b310-d545896b1d5a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.880552 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-internal-tls-certs\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.880597 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-public-tls-certs\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.880986 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9bmd\" (UniqueName: \"kubernetes.io/projected/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-kube-api-access-j9bmd\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.881283 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-logs\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.881492 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.881730 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e09c98c7-08b6-4e32-b310-d545896b1d5a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.881975 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-logs\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.885793 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e09c98c7-08b6-4e32-b310-d545896b1d5a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e09c98c7-08b6-4e32-b310-d545896b1d5a" (UID: "e09c98c7-08b6-4e32-b310-d545896b1d5a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.885828 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-public-tls-certs\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.887419 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-config-data\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.888332 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.889200 4895 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e09c98c7-08b6-4e32-b310-d545896b1d5a-kube-api-access-ccgnk" (OuterVolumeSpecName: "kube-api-access-ccgnk") pod "e09c98c7-08b6-4e32-b310-d545896b1d5a" (UID: "e09c98c7-08b6-4e32-b310-d545896b1d5a"). InnerVolumeSpecName "kube-api-access-ccgnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.889782 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-scripts" (OuterVolumeSpecName: "scripts") pod "e09c98c7-08b6-4e32-b310-d545896b1d5a" (UID: "e09c98c7-08b6-4e32-b310-d545896b1d5a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.891759 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-internal-tls-certs\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.901121 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9bmd\" (UniqueName: \"kubernetes.io/projected/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-kube-api-access-j9bmd\") pod \"nova-api-0\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") " pod="openstack/nova-api-0" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.933402 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e09c98c7-08b6-4e32-b310-d545896b1d5a" (UID: "e09c98c7-08b6-4e32-b310-d545896b1d5a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.959019 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e09c98c7-08b6-4e32-b310-d545896b1d5a" (UID: "e09c98c7-08b6-4e32-b310-d545896b1d5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.983033 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.983069 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.983080 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccgnk\" (UniqueName: \"kubernetes.io/projected/e09c98c7-08b6-4e32-b310-d545896b1d5a-kube-api-access-ccgnk\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.983090 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09c98c7-08b6-4e32-b310-d545896b1d5a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.983102 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:55 crc kubenswrapper[4895]: I0129 16:32:55.992534 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-config-data" (OuterVolumeSpecName: "config-data") pod "e09c98c7-08b6-4e32-b310-d545896b1d5a" (UID: "e09c98c7-08b6-4e32-b310-d545896b1d5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.085322 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09c98c7-08b6-4e32-b310-d545896b1d5a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.097294 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.586409 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.662565 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09c98c7-08b6-4e32-b310-d545896b1d5a","Type":"ContainerDied","Data":"ce2721779155e17fc0a6a6b81a6e98379011720d9251b1ba1725c1d7fa29a402"} Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.662690 4895 scope.go:117] "RemoveContainer" containerID="bdc54448c7c82ac9058253115fad16b0d60831418fab4ecde638024be9d7876c" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.664082 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.669821 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64","Type":"ContainerStarted","Data":"d859fc0aafed8670520180be5765b6e4f28537d83ec2ef74f28b1a10ceeaa61a"} Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.697471 4895 scope.go:117] "RemoveContainer" containerID="0b3d01a70d674d61ead5c03dd02af8500213453773a49719f4b87cef0ec20d28" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.726810 4895 scope.go:117] "RemoveContainer" containerID="6e15d5b3b141fabcc335de298706f346763e5578a1ec83bf9b4049e4d62369b2" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.758972 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.803820 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.803918 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:32:56 crc kubenswrapper[4895]: E0129 16:32:56.804598 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="ceilometer-notification-agent" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.804621 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="ceilometer-notification-agent" Jan 29 16:32:56 crc kubenswrapper[4895]: E0129 16:32:56.804643 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="ceilometer-central-agent" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.804651 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="ceilometer-central-agent" Jan 29 
16:32:56 crc kubenswrapper[4895]: E0129 16:32:56.804660 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="proxy-httpd" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.804670 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="proxy-httpd" Jan 29 16:32:56 crc kubenswrapper[4895]: E0129 16:32:56.804685 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="sg-core" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.804691 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="sg-core" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.805276 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="sg-core" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.805325 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="ceilometer-central-agent" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.805345 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="ceilometer-notification-agent" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.805360 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" containerName="proxy-httpd" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.808297 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.808476 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.809166 4895 scope.go:117] "RemoveContainer" containerID="eb564d4bae5d4196379241f0277b03daaa388edf74e16ed1162d33fee7f15749" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.815779 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.817219 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.892363 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.914074 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.914171 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.914307 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.914405 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-config-data\") pod \"ceilometer-0\" (UID: 
\"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.914422 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b76d8a0a-9395-4b6c-8775-efa0354ace99-run-httpd\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.914497 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b76d8a0a-9395-4b6c-8775-efa0354ace99-log-httpd\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.914687 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwvfz\" (UniqueName: \"kubernetes.io/projected/b76d8a0a-9395-4b6c-8775-efa0354ace99-kube-api-access-lwvfz\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:56 crc kubenswrapper[4895]: I0129 16:32:56.914827 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-scripts\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.035086 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-scripts\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.035184 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.035264 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.035330 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-config-data\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.035349 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b76d8a0a-9395-4b6c-8775-efa0354ace99-run-httpd\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.035392 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b76d8a0a-9395-4b6c-8775-efa0354ace99-log-httpd\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.035415 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwvfz\" (UniqueName: \"kubernetes.io/projected/b76d8a0a-9395-4b6c-8775-efa0354ace99-kube-api-access-lwvfz\") pod \"ceilometer-0\" (UID: 
\"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.036473 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b76d8a0a-9395-4b6c-8775-efa0354ace99-run-httpd\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.036966 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b76d8a0a-9395-4b6c-8775-efa0354ace99-log-httpd\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.038371 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.038601 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.041719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.050186 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-config-data\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.054382 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.073519 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-scripts\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.076854 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e09c98c7-08b6-4e32-b310-d545896b1d5a" path="/var/lib/kubelet/pods/e09c98c7-08b6-4e32-b310-d545896b1d5a/volumes" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.079661 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f276e4-43c0-4ac6-a057-1e36cbf150d5" path="/var/lib/kubelet/pods/e5f276e4-43c0-4ac6-a057-1e36cbf150d5/volumes" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.080735 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwvfz\" (UniqueName: \"kubernetes.io/projected/b76d8a0a-9395-4b6c-8775-efa0354ace99-kube-api-access-lwvfz\") pod \"ceilometer-0\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") " pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.127921 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:32:57 crc kubenswrapper[4895]: W0129 16:32:57.595748 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb76d8a0a_9395_4b6c_8775_efa0354ace99.slice/crio-a9c0c1347384641e76bb26099f8fd31927675168774a6c753aad92bd6e116c08 WatchSource:0}: Error finding container a9c0c1347384641e76bb26099f8fd31927675168774a6c753aad92bd6e116c08: Status 404 returned error can't find the container with id a9c0c1347384641e76bb26099f8fd31927675168774a6c753aad92bd6e116c08 Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.596485 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.686092 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b76d8a0a-9395-4b6c-8775-efa0354ace99","Type":"ContainerStarted","Data":"a9c0c1347384641e76bb26099f8fd31927675168774a6c753aad92bd6e116c08"} Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.691396 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64","Type":"ContainerStarted","Data":"d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e"} Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.691433 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64","Type":"ContainerStarted","Data":"fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8"} Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.730234 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.777124 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=2.777086435 podStartE2EDuration="2.777086435s" podCreationTimestamp="2026-01-29 16:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:57.722727759 +0000 UTC m=+1261.525705023" watchObservedRunningTime="2026-01-29 16:32:57.777086435 +0000 UTC m=+1261.580063729" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.823052 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.823182 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.823287 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.824462 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b61d481b9d79815e2aa0a6766b442621a7f9d5212d6a5963946c3b9463e8ef1c"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.824653 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" 
podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://b61d481b9d79815e2aa0a6766b442621a7f9d5212d6a5963946c3b9463e8ef1c" gracePeriod=600 Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.943169 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-9njnh"] Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.945067 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.949685 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.949952 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 16:32:57 crc kubenswrapper[4895]: I0129 16:32:57.965169 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9njnh"] Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.060488 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-config-data\") pod \"nova-cell1-cell-mapping-9njnh\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.060595 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9njnh\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.060635 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk2d6\" (UniqueName: \"kubernetes.io/projected/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-kube-api-access-sk2d6\") pod \"nova-cell1-cell-mapping-9njnh\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.060665 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-scripts\") pod \"nova-cell1-cell-mapping-9njnh\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.162649 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9njnh\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.162736 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk2d6\" (UniqueName: \"kubernetes.io/projected/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-kube-api-access-sk2d6\") pod \"nova-cell1-cell-mapping-9njnh\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.162777 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-scripts\") pod \"nova-cell1-cell-mapping-9njnh\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.162850 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-config-data\") pod \"nova-cell1-cell-mapping-9njnh\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.171207 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-scripts\") pod \"nova-cell1-cell-mapping-9njnh\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.171284 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9njnh\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.171612 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-config-data\") pod \"nova-cell1-cell-mapping-9njnh\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.191580 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk2d6\" (UniqueName: \"kubernetes.io/projected/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-kube-api-access-sk2d6\") pod \"nova-cell1-cell-mapping-9njnh\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.271798 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.715719 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b76d8a0a-9395-4b6c-8775-efa0354ace99","Type":"ContainerStarted","Data":"e6776baf98e3c74dcce2ea05f6257cb02d5d52590a5c6e16ea3f24463443188c"} Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.719395 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="b61d481b9d79815e2aa0a6766b442621a7f9d5212d6a5963946c3b9463e8ef1c" exitCode=0 Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.719461 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"b61d481b9d79815e2aa0a6766b442621a7f9d5212d6a5963946c3b9463e8ef1c"} Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.719501 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"ba3dd6b954350bf38e8b9f1effc919dbdd8be56496986ff2037f29d7f2db3c91"} Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.719529 4895 scope.go:117] "RemoveContainer" containerID="56eae442f108da9a8c7cd978ba66ad557a49280ec8ee87651bc60ede37bf78eb" Jan 29 16:32:58 crc kubenswrapper[4895]: I0129 16:32:58.761784 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9njnh"] Jan 29 16:32:59 crc kubenswrapper[4895]: I0129 16:32:59.459060 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:32:59 crc kubenswrapper[4895]: I0129 16:32:59.533830 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-gk6f9"] Jan 29 
16:32:59 crc kubenswrapper[4895]: I0129 16:32:59.534173 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" podUID="5726a2e0-7132-4dba-a2e2-5d19e2260f49" containerName="dnsmasq-dns" containerID="cri-o://cc8d956593f75921d938f88868311a00440f056f7e873baba313ff55a79b8f71" gracePeriod=10 Jan 29 16:32:59 crc kubenswrapper[4895]: I0129 16:32:59.740046 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b76d8a0a-9395-4b6c-8775-efa0354ace99","Type":"ContainerStarted","Data":"f62f3fbd256ef4b3dee36f77c62c6e5568b0018163aed7d27a02e379a1114749"} Jan 29 16:32:59 crc kubenswrapper[4895]: I0129 16:32:59.747552 4895 generic.go:334] "Generic (PLEG): container finished" podID="5726a2e0-7132-4dba-a2e2-5d19e2260f49" containerID="cc8d956593f75921d938f88868311a00440f056f7e873baba313ff55a79b8f71" exitCode=0 Jan 29 16:32:59 crc kubenswrapper[4895]: I0129 16:32:59.747633 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" event={"ID":"5726a2e0-7132-4dba-a2e2-5d19e2260f49","Type":"ContainerDied","Data":"cc8d956593f75921d938f88868311a00440f056f7e873baba313ff55a79b8f71"} Jan 29 16:32:59 crc kubenswrapper[4895]: I0129 16:32:59.756673 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9njnh" event={"ID":"2cf0090e-70c1-40ea-9cb0-e880c5c95c26","Type":"ContainerStarted","Data":"a83f1deb854eccbec4e80432fa04db33e0fee10c0969cf11a8d09ab0069417a7"} Jan 29 16:32:59 crc kubenswrapper[4895]: I0129 16:32:59.756731 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9njnh" event={"ID":"2cf0090e-70c1-40ea-9cb0-e880c5c95c26","Type":"ContainerStarted","Data":"7f5f10fcdcb1e8144996a655d71f4ee6d910e9914eb7aeb6663f009f8e14672c"} Jan 29 16:32:59 crc kubenswrapper[4895]: I0129 16:32:59.782327 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-cell-mapping-9njnh" podStartSLOduration=2.782294036 podStartE2EDuration="2.782294036s" podCreationTimestamp="2026-01-29 16:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:59.776453652 +0000 UTC m=+1263.579430926" watchObservedRunningTime="2026-01-29 16:32:59.782294036 +0000 UTC m=+1263.585271300" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.012975 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.105645 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-dns-svc\") pod \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.105726 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79mr7\" (UniqueName: \"kubernetes.io/projected/5726a2e0-7132-4dba-a2e2-5d19e2260f49-kube-api-access-79mr7\") pod \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.105900 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-config\") pod \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.106060 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-ovsdbserver-nb\") pod \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\" (UID: 
\"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.106187 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-ovsdbserver-sb\") pod \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\" (UID: \"5726a2e0-7132-4dba-a2e2-5d19e2260f49\") " Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.115061 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5726a2e0-7132-4dba-a2e2-5d19e2260f49-kube-api-access-79mr7" (OuterVolumeSpecName: "kube-api-access-79mr7") pod "5726a2e0-7132-4dba-a2e2-5d19e2260f49" (UID: "5726a2e0-7132-4dba-a2e2-5d19e2260f49"). InnerVolumeSpecName "kube-api-access-79mr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.169749 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5726a2e0-7132-4dba-a2e2-5d19e2260f49" (UID: "5726a2e0-7132-4dba-a2e2-5d19e2260f49"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.171465 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-config" (OuterVolumeSpecName: "config") pod "5726a2e0-7132-4dba-a2e2-5d19e2260f49" (UID: "5726a2e0-7132-4dba-a2e2-5d19e2260f49"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.180116 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5726a2e0-7132-4dba-a2e2-5d19e2260f49" (UID: "5726a2e0-7132-4dba-a2e2-5d19e2260f49"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.187523 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5726a2e0-7132-4dba-a2e2-5d19e2260f49" (UID: "5726a2e0-7132-4dba-a2e2-5d19e2260f49"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.208530 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.208576 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.208589 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.208600 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79mr7\" (UniqueName: \"kubernetes.io/projected/5726a2e0-7132-4dba-a2e2-5d19e2260f49-kube-api-access-79mr7\") on node \"crc\" DevicePath \"\"" Jan 29 
16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.208609 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5726a2e0-7132-4dba-a2e2-5d19e2260f49-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:00 crc kubenswrapper[4895]: E0129 16:33:00.448466 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:33:00 crc kubenswrapper[4895]: E0129 16:33:00.449192 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwvfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,Mo
untPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b76d8a0a-9395-4b6c-8775-efa0354ace99): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:33:00 crc kubenswrapper[4895]: E0129 16:33:00.450966 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.771229 
4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b76d8a0a-9395-4b6c-8775-efa0354ace99","Type":"ContainerStarted","Data":"1d7fdc0603e2a7da71090e3dc5aa4807fc75682ff8b9703284c36e36213d876a"} Jan 29 16:33:00 crc kubenswrapper[4895]: E0129 16:33:00.773984 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.774535 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" event={"ID":"5726a2e0-7132-4dba-a2e2-5d19e2260f49","Type":"ContainerDied","Data":"4df1e7bad77ac8dc6b3159ecf265fbb2df6636f0832147d6a64b1acf77b80f3b"} Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.774611 4895 scope.go:117] "RemoveContainer" containerID="cc8d956593f75921d938f88868311a00440f056f7e873baba313ff55a79b8f71" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.774619 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-gk6f9" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.804386 4895 scope.go:117] "RemoveContainer" containerID="9510b917dbaba86b05ba6290906bbc16befbb1b057872bc52f65f62ecd9b10ea" Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.875278 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-gk6f9"] Jan 29 16:33:00 crc kubenswrapper[4895]: I0129 16:33:00.886958 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-gk6f9"] Jan 29 16:33:01 crc kubenswrapper[4895]: I0129 16:33:01.051196 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5726a2e0-7132-4dba-a2e2-5d19e2260f49" path="/var/lib/kubelet/pods/5726a2e0-7132-4dba-a2e2-5d19e2260f49/volumes" Jan 29 16:33:01 crc kubenswrapper[4895]: E0129 16:33:01.789040 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:33:04 crc kubenswrapper[4895]: I0129 16:33:04.836258 4895 generic.go:334] "Generic (PLEG): container finished" podID="2cf0090e-70c1-40ea-9cb0-e880c5c95c26" containerID="a83f1deb854eccbec4e80432fa04db33e0fee10c0969cf11a8d09ab0069417a7" exitCode=0 Jan 29 16:33:04 crc kubenswrapper[4895]: I0129 16:33:04.836329 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9njnh" event={"ID":"2cf0090e-70c1-40ea-9cb0-e880c5c95c26","Type":"ContainerDied","Data":"a83f1deb854eccbec4e80432fa04db33e0fee10c0969cf11a8d09ab0069417a7"} Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.097596 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.097994 4895 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.259245 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.389550 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-scripts\") pod \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.389961 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk2d6\" (UniqueName: \"kubernetes.io/projected/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-kube-api-access-sk2d6\") pod \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.390053 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-combined-ca-bundle\") pod \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.390168 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-config-data\") pod \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\" (UID: \"2cf0090e-70c1-40ea-9cb0-e880c5c95c26\") " Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.403410 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-kube-api-access-sk2d6" (OuterVolumeSpecName: "kube-api-access-sk2d6") pod "2cf0090e-70c1-40ea-9cb0-e880c5c95c26" 
(UID: "2cf0090e-70c1-40ea-9cb0-e880c5c95c26"). InnerVolumeSpecName "kube-api-access-sk2d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.414168 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-scripts" (OuterVolumeSpecName: "scripts") pod "2cf0090e-70c1-40ea-9cb0-e880c5c95c26" (UID: "2cf0090e-70c1-40ea-9cb0-e880c5c95c26"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.431534 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-config-data" (OuterVolumeSpecName: "config-data") pod "2cf0090e-70c1-40ea-9cb0-e880c5c95c26" (UID: "2cf0090e-70c1-40ea-9cb0-e880c5c95c26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.437716 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cf0090e-70c1-40ea-9cb0-e880c5c95c26" (UID: "2cf0090e-70c1-40ea-9cb0-e880c5c95c26"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.501308 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.501360 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.501374 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk2d6\" (UniqueName: \"kubernetes.io/projected/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-kube-api-access-sk2d6\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.501392 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf0090e-70c1-40ea-9cb0-e880c5c95c26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.877251 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9njnh" event={"ID":"2cf0090e-70c1-40ea-9cb0-e880c5c95c26","Type":"ContainerDied","Data":"7f5f10fcdcb1e8144996a655d71f4ee6d910e9914eb7aeb6663f009f8e14672c"} Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.877303 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f5f10fcdcb1e8144996a655d71f4ee6d910e9914eb7aeb6663f009f8e14672c" Jan 29 16:33:06 crc kubenswrapper[4895]: I0129 16:33:06.877408 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9njnh" Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.057199 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.057666 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" containerName="nova-api-log" containerID="cri-o://fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8" gracePeriod=30 Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.057691 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" containerName="nova-api-api" containerID="cri-o://d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e" gracePeriod=30 Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.065979 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": EOF" Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.066209 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.185:8774/\": EOF" Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.082843 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.083181 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7d3b4c1a-b5fa-4816-9653-2e7841f39dce" containerName="nova-scheduler-scheduler" containerID="cri-o://f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027" gracePeriod=30 Jan 
29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.151172 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.151411 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerName="nova-metadata-log" containerID="cri-o://d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c" gracePeriod=30 Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.151950 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerName="nova-metadata-metadata" containerID="cri-o://4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721" gracePeriod=30 Jan 29 16:33:07 crc kubenswrapper[4895]: E0129 16:33:07.521248 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:33:07 crc kubenswrapper[4895]: E0129 16:33:07.522807 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:33:07 crc kubenswrapper[4895]: E0129 16:33:07.524241 4895 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:33:07 crc kubenswrapper[4895]: E0129 16:33:07.524291 4895 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7d3b4c1a-b5fa-4816-9653-2e7841f39dce" containerName="nova-scheduler-scheduler" Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.893736 4895 generic.go:334] "Generic (PLEG): container finished" podID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerID="d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c" exitCode=143 Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.893807 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b879ebd-b686-4535-aa46-94baaa9c0ae7","Type":"ContainerDied","Data":"d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c"} Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.896143 4895 generic.go:334] "Generic (PLEG): container finished" podID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" containerID="fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8" exitCode=143 Jan 29 16:33:07 crc kubenswrapper[4895]: I0129 16:33:07.896179 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64","Type":"ContainerDied","Data":"fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8"} Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.307272 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": read tcp 10.217.0.2:39328->10.217.0.179:8775: read: connection reset by peer" Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.307342 4895 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": read tcp 10.217.0.2:39332->10.217.0.179:8775: read: connection reset by peer" Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.840193 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.954967 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b879ebd-b686-4535-aa46-94baaa9c0ae7-logs\") pod \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.955064 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-nova-metadata-tls-certs\") pod \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.955477 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tqpl\" (UniqueName: \"kubernetes.io/projected/7b879ebd-b686-4535-aa46-94baaa9c0ae7-kube-api-access-6tqpl\") pod \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.955580 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-config-data\") pod \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.955618 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-combined-ca-bundle\") pod \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\" (UID: \"7b879ebd-b686-4535-aa46-94baaa9c0ae7\") " Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.956058 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b879ebd-b686-4535-aa46-94baaa9c0ae7-logs" (OuterVolumeSpecName: "logs") pod "7b879ebd-b686-4535-aa46-94baaa9c0ae7" (UID: "7b879ebd-b686-4535-aa46-94baaa9c0ae7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.964894 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b879ebd-b686-4535-aa46-94baaa9c0ae7-kube-api-access-6tqpl" (OuterVolumeSpecName: "kube-api-access-6tqpl") pod "7b879ebd-b686-4535-aa46-94baaa9c0ae7" (UID: "7b879ebd-b686-4535-aa46-94baaa9c0ae7"). InnerVolumeSpecName "kube-api-access-6tqpl". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.985979 4895 generic.go:334] "Generic (PLEG): container finished" podID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerID="4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721" exitCode=0
Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.986069 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b879ebd-b686-4535-aa46-94baaa9c0ae7","Type":"ContainerDied","Data":"4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721"}
Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.986116 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b879ebd-b686-4535-aa46-94baaa9c0ae7","Type":"ContainerDied","Data":"13773b8c01dae1f9afb9637e7b9a7a163473a12f1b8988c1086ea2279e081e11"}
Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.986145 4895 scope.go:117] "RemoveContainer" containerID="4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721"
Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.986366 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.988738 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b879ebd-b686-4535-aa46-94baaa9c0ae7" (UID: "7b879ebd-b686-4535-aa46-94baaa9c0ae7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:33:10 crc kubenswrapper[4895]: I0129 16:33:10.999284 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-config-data" (OuterVolumeSpecName: "config-data") pod "7b879ebd-b686-4535-aa46-94baaa9c0ae7" (UID: "7b879ebd-b686-4535-aa46-94baaa9c0ae7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.030444 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7b879ebd-b686-4535-aa46-94baaa9c0ae7" (UID: "7b879ebd-b686-4535-aa46-94baaa9c0ae7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.060396 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b879ebd-b686-4535-aa46-94baaa9c0ae7-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.060593 4895 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.060653 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tqpl\" (UniqueName: \"kubernetes.io/projected/7b879ebd-b686-4535-aa46-94baaa9c0ae7-kube-api-access-6tqpl\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.060743 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.060801 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b879ebd-b686-4535-aa46-94baaa9c0ae7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.078739 4895 scope.go:117] "RemoveContainer" containerID="d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.105039 4895 scope.go:117] "RemoveContainer" containerID="4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721"
Jan 29 16:33:11 crc kubenswrapper[4895]: E0129 16:33:11.107920 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721\": container with ID starting with 4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721 not found: ID does not exist" containerID="4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.107954 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721"} err="failed to get container status \"4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721\": rpc error: code = NotFound desc = could not find container \"4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721\": container with ID starting with 4fb962fead2ea5b353630c23a39bb109b2e6a0f0d1aa3610af3e24a6d7a82721 not found: ID does not exist"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.107986 4895 scope.go:117] "RemoveContainer" containerID="d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c"
Jan 29 16:33:11 crc kubenswrapper[4895]: E0129 16:33:11.108461 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c\": container with ID starting with d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c not found: ID does not exist" containerID="d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.108486 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c"} err="failed to get container status \"d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c\": rpc error: code = NotFound desc = could not find container \"d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c\": container with ID starting with d1109108608f3e0364b0b0a020cb3978a9e696756f3c2ee223f740aef8dd039c not found: ID does not exist"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.334049 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.343451 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.358054 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:33:11 crc kubenswrapper[4895]: E0129 16:33:11.358795 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerName="nova-metadata-log"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.358828 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerName="nova-metadata-log"
Jan 29 16:33:11 crc kubenswrapper[4895]: E0129 16:33:11.358858 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5726a2e0-7132-4dba-a2e2-5d19e2260f49" containerName="dnsmasq-dns"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.358899 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5726a2e0-7132-4dba-a2e2-5d19e2260f49" containerName="dnsmasq-dns"
Jan 29 16:33:11 crc kubenswrapper[4895]: E0129 16:33:11.358929 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerName="nova-metadata-metadata"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.358948 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerName="nova-metadata-metadata"
Jan 29 16:33:11 crc kubenswrapper[4895]: E0129 16:33:11.358975 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5726a2e0-7132-4dba-a2e2-5d19e2260f49" containerName="init"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.358990 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5726a2e0-7132-4dba-a2e2-5d19e2260f49" containerName="init"
Jan 29 16:33:11 crc kubenswrapper[4895]: E0129 16:33:11.359028 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf0090e-70c1-40ea-9cb0-e880c5c95c26" containerName="nova-manage"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.359040 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf0090e-70c1-40ea-9cb0-e880c5c95c26" containerName="nova-manage"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.359383 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5726a2e0-7132-4dba-a2e2-5d19e2260f49" containerName="dnsmasq-dns"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.359434 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerName="nova-metadata-metadata"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.359457 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf0090e-70c1-40ea-9cb0-e880c5c95c26" containerName="nova-manage"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.359489 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" containerName="nova-metadata-log"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.361480 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.364453 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.364546 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.389506 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.390371 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.390447 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-logs\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.390563 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-config-data\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.390592 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc94f\" (UniqueName: \"kubernetes.io/projected/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-kube-api-access-vc94f\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.390683 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.491337 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.491438 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.491460 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-logs\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.491514 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-config-data\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.491530 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc94f\" (UniqueName: \"kubernetes.io/projected/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-kube-api-access-vc94f\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.492503 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-logs\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.500102 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.501123 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-config-data\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.502800 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.518203 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc94f\" (UniqueName: \"kubernetes.io/projected/ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7-kube-api-access-vc94f\") pod \"nova-metadata-0\" (UID: \"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7\") " pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.740150 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.869316 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.999757 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-combined-ca-bundle\") pod \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\" (UID: \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\") "
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.999824 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gvhz\" (UniqueName: \"kubernetes.io/projected/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-kube-api-access-5gvhz\") pod \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\" (UID: \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\") "
Jan 29 16:33:11 crc kubenswrapper[4895]: I0129 16:33:11.999922 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-config-data\") pod \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\" (UID: \"7d3b4c1a-b5fa-4816-9653-2e7841f39dce\") "
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.009774 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-kube-api-access-5gvhz" (OuterVolumeSpecName: "kube-api-access-5gvhz") pod "7d3b4c1a-b5fa-4816-9653-2e7841f39dce" (UID: "7d3b4c1a-b5fa-4816-9653-2e7841f39dce"). InnerVolumeSpecName "kube-api-access-5gvhz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.010491 4895 generic.go:334] "Generic (PLEG): container finished" podID="7d3b4c1a-b5fa-4816-9653-2e7841f39dce" containerID="f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027" exitCode=0
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.010558 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7d3b4c1a-b5fa-4816-9653-2e7841f39dce","Type":"ContainerDied","Data":"f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027"}
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.010605 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7d3b4c1a-b5fa-4816-9653-2e7841f39dce","Type":"ContainerDied","Data":"b4aae2a56819fc343de501859958dfd422e8764efec03f01f93d41070f1c17e1"}
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.010629 4895 scope.go:117] "RemoveContainer" containerID="f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.010807 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.036135 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d3b4c1a-b5fa-4816-9653-2e7841f39dce" (UID: "7d3b4c1a-b5fa-4816-9653-2e7841f39dce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.037994 4895 scope.go:117] "RemoveContainer" containerID="f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027"
Jan 29 16:33:12 crc kubenswrapper[4895]: E0129 16:33:12.038550 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027\": container with ID starting with f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027 not found: ID does not exist" containerID="f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.038586 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027"} err="failed to get container status \"f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027\": rpc error: code = NotFound desc = could not find container \"f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027\": container with ID starting with f4a65d01d0ec5c12afed717d7f435db3ee4c1b1763ba1b6190a5f47004955027 not found: ID does not exist"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.043795 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-config-data" (OuterVolumeSpecName: "config-data") pod "7d3b4c1a-b5fa-4816-9653-2e7841f39dce" (UID: "7d3b4c1a-b5fa-4816-9653-2e7841f39dce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.104361 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.104406 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.104419 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gvhz\" (UniqueName: \"kubernetes.io/projected/7d3b4c1a-b5fa-4816-9653-2e7841f39dce-kube-api-access-5gvhz\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:12 crc kubenswrapper[4895]: W0129 16:33:12.249653 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac1eaec6_ae6c_40bd_b790_7e1fe9c8f0f7.slice/crio-24a6f542ab14752adaa788b96288f7dc305d9bb35118abb4d419ccdd9c1bfe7b WatchSource:0}: Error finding container 24a6f542ab14752adaa788b96288f7dc305d9bb35118abb4d419ccdd9c1bfe7b: Status 404 returned error can't find the container with id 24a6f542ab14752adaa788b96288f7dc305d9bb35118abb4d419ccdd9c1bfe7b
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.250112 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.376541 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.395739 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.419059 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 16:33:12 crc kubenswrapper[4895]: E0129 16:33:12.419848 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3b4c1a-b5fa-4816-9653-2e7841f39dce" containerName="nova-scheduler-scheduler"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.419904 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3b4c1a-b5fa-4816-9653-2e7841f39dce" containerName="nova-scheduler-scheduler"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.420227 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3b4c1a-b5fa-4816-9653-2e7841f39dce" containerName="nova-scheduler-scheduler"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.421415 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.427881 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.435100 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.515535 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1-config-data\") pod \"nova-scheduler-0\" (UID: \"7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1\") " pod="openstack/nova-scheduler-0"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.516004 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbb99\" (UniqueName: \"kubernetes.io/projected/7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1-kube-api-access-fbb99\") pod \"nova-scheduler-0\" (UID: \"7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1\") " pod="openstack/nova-scheduler-0"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.516082 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1\") " pod="openstack/nova-scheduler-0"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.618536 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1-config-data\") pod \"nova-scheduler-0\" (UID: \"7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1\") " pod="openstack/nova-scheduler-0"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.618745 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbb99\" (UniqueName: \"kubernetes.io/projected/7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1-kube-api-access-fbb99\") pod \"nova-scheduler-0\" (UID: \"7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1\") " pod="openstack/nova-scheduler-0"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.618779 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1\") " pod="openstack/nova-scheduler-0"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.625333 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1\") " pod="openstack/nova-scheduler-0"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.633393 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1-config-data\") pod \"nova-scheduler-0\" (UID: \"7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1\") " pod="openstack/nova-scheduler-0"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.646248 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbb99\" (UniqueName: \"kubernetes.io/projected/7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1-kube-api-access-fbb99\") pod \"nova-scheduler-0\" (UID: \"7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1\") " pod="openstack/nova-scheduler-0"
Jan 29 16:33:12 crc kubenswrapper[4895]: I0129 16:33:12.763181 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.011154 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.031421 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-public-tls-certs\") pod \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") "
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.031512 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-combined-ca-bundle\") pod \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") "
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.031559 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-logs\") pod \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") "
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.031669 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9bmd\" (UniqueName: \"kubernetes.io/projected/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-kube-api-access-j9bmd\") pod \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") "
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.031724 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-config-data\") pod \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") "
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.031746 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-internal-tls-certs\") pod \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\" (UID: \"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64\") "
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.036530 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-logs" (OuterVolumeSpecName: "logs") pod "06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" (UID: "06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.056448 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-kube-api-access-j9bmd" (OuterVolumeSpecName: "kube-api-access-j9bmd") pod "06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" (UID: "06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64"). InnerVolumeSpecName "kube-api-access-j9bmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.057637 4895 generic.go:334] "Generic (PLEG): container finished" podID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" containerID="d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e" exitCode=0
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.057936 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.071570 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" (UID: "06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.071617 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b879ebd-b686-4535-aa46-94baaa9c0ae7" path="/var/lib/kubelet/pods/7b879ebd-b686-4535-aa46-94baaa9c0ae7/volumes"
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.081433 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3b4c1a-b5fa-4816-9653-2e7841f39dce" path="/var/lib/kubelet/pods/7d3b4c1a-b5fa-4816-9653-2e7841f39dce/volumes"
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.082144 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-config-data" (OuterVolumeSpecName: "config-data") pod "06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" (UID: "06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.086769 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64","Type":"ContainerDied","Data":"d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e"}
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.086841 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64","Type":"ContainerDied","Data":"d859fc0aafed8670520180be5765b6e4f28537d83ec2ef74f28b1a10ceeaa61a"}
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.086857 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7","Type":"ContainerStarted","Data":"31829078c4504fa34ba3aa74a872e57641a38bb8449b09e8410767e79d16e0d6"}
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.086899 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7","Type":"ContainerStarted","Data":"e621c744ec6fd4c0ee22a6781c6494dba4ffbff640c8493f5ee21716b8aa015c"}
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.086909 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7","Type":"ContainerStarted","Data":"24a6f542ab14752adaa788b96288f7dc305d9bb35118abb4d419ccdd9c1bfe7b"}
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.086943 4895 scope.go:117] "RemoveContainer" containerID="d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e"
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.107029 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" (UID: "06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.112715 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.112678931 podStartE2EDuration="2.112678931s" podCreationTimestamp="2026-01-29 16:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:33:13.097579675 +0000 UTC m=+1276.900556969" watchObservedRunningTime="2026-01-29 16:33:13.112678931 +0000 UTC m=+1276.915656205"
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.112970 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" (UID: "06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.126990 4895 scope.go:117] "RemoveContainer" containerID="fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8"
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.134984 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.135029 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.135044 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.135058 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9bmd\" (UniqueName: \"kubernetes.io/projected/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-kube-api-access-j9bmd\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.135073 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.135097 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.159965 4895 scope.go:117] "RemoveContainer" containerID="d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e"
Jan 29 16:33:13 crc kubenswrapper[4895]: E0129 16:33:13.160462 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e\": container with ID starting with d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e not found: ID does not exist" containerID="d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e"
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.160501 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e"} err="failed to get container status \"d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e\": rpc error: code = NotFound desc = could not find container \"d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e\": container with ID starting with d79f5d5851a8db0e1a51ca5c2264750a92ee2fbf19a7a6d05b1833f09b00d78e not found: ID does not exist"
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.160536 4895 scope.go:117] "RemoveContainer" containerID="fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8"
Jan 29 16:33:13 crc kubenswrapper[4895]: E0129 16:33:13.160901 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8\": container with ID starting with fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8 not found: ID does not exist" containerID="fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8"
Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.160936 4895 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8"} err="failed to get container status \"fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8\": rpc error: code = NotFound desc = could not find container \"fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8\": container with ID starting with fa32699e2e5e586284b877a55f9b3bfba10b37d65d82f4f9ff549353ff6b11c8 not found: ID does not exist" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.322743 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:33:13 crc kubenswrapper[4895]: W0129 16:33:13.323519 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b8e9bd4_5ccc_4cad_84c4_be15b2c180b1.slice/crio-1f2dfac7364dab9ebd0f360a6df0ed49defa5d30486324b3d90bcf46576ddd3f WatchSource:0}: Error finding container 1f2dfac7364dab9ebd0f360a6df0ed49defa5d30486324b3d90bcf46576ddd3f: Status 404 returned error can't find the container with id 1f2dfac7364dab9ebd0f360a6df0ed49defa5d30486324b3d90bcf46576ddd3f Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.546171 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.557278 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.583475 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:33:13 crc kubenswrapper[4895]: E0129 16:33:13.587405 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" containerName="nova-api-api" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.587447 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" containerName="nova-api-api" Jan 29 16:33:13 
crc kubenswrapper[4895]: E0129 16:33:13.587480 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" containerName="nova-api-log" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.587490 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" containerName="nova-api-log" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.587817 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" containerName="nova-api-api" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.587843 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" containerName="nova-api-log" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.589175 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.591629 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.591881 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.593315 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.612009 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.651919 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc 
kubenswrapper[4895]: I0129 16:33:13.653134 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-config-data\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.653671 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-public-tls-certs\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.654070 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7cp\" (UniqueName: \"kubernetes.io/projected/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-kube-api-access-5b7cp\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.654340 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.655483 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-logs\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.758433 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5b7cp\" (UniqueName: \"kubernetes.io/projected/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-kube-api-access-5b7cp\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.758486 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.758555 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-logs\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.758637 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.758668 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-config-data\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.758694 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-public-tls-certs\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc 
kubenswrapper[4895]: I0129 16:33:13.759242 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-logs\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.765758 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-public-tls-certs\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.765842 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.765847 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-config-data\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.767112 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.780528 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b7cp\" (UniqueName: \"kubernetes.io/projected/1362e7b3-cc8c-4a47-a93f-f5e98cce6acd-kube-api-access-5b7cp\") pod \"nova-api-0\" (UID: 
\"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd\") " pod="openstack/nova-api-0" Jan 29 16:33:13 crc kubenswrapper[4895]: I0129 16:33:13.924517 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:33:14 crc kubenswrapper[4895]: I0129 16:33:14.140257 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1","Type":"ContainerStarted","Data":"f11cb7f141d1cf7570b37926bd6c6ecdbbaa9131155ef10c167261be6140c31d"} Jan 29 16:33:14 crc kubenswrapper[4895]: I0129 16:33:14.140368 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1","Type":"ContainerStarted","Data":"1f2dfac7364dab9ebd0f360a6df0ed49defa5d30486324b3d90bcf46576ddd3f"} Jan 29 16:33:14 crc kubenswrapper[4895]: I0129 16:33:14.180151 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.180124326 podStartE2EDuration="2.180124326s" podCreationTimestamp="2026-01-29 16:33:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:33:14.155754737 +0000 UTC m=+1277.958732021" watchObservedRunningTime="2026-01-29 16:33:14.180124326 +0000 UTC m=+1277.983101590" Jan 29 16:33:14 crc kubenswrapper[4895]: W0129 16:33:14.478754 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1362e7b3_cc8c_4a47_a93f_f5e98cce6acd.slice/crio-5e54b76e570f3b12c08340c277c843f71fb681954998bf84e8789389d341e428 WatchSource:0}: Error finding container 5e54b76e570f3b12c08340c277c843f71fb681954998bf84e8789389d341e428: Status 404 returned error can't find the container with id 5e54b76e570f3b12c08340c277c843f71fb681954998bf84e8789389d341e428 Jan 29 16:33:14 crc kubenswrapper[4895]: I0129 
16:33:14.480427 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:33:15 crc kubenswrapper[4895]: I0129 16:33:15.057916 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64" path="/var/lib/kubelet/pods/06ac1e9c-4331-4bc4-95c8-0bcda6c2dc64/volumes" Jan 29 16:33:15 crc kubenswrapper[4895]: I0129 16:33:15.160438 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd","Type":"ContainerStarted","Data":"baf04604f352cc7da47da58132ebfb5b2e4715a1df333d74af0de62d0e677249"} Jan 29 16:33:15 crc kubenswrapper[4895]: I0129 16:33:15.160516 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd","Type":"ContainerStarted","Data":"dda712dfd1c71397f930dcfbebcf79afd8cc566220eacb69a75ebb39db539659"} Jan 29 16:33:15 crc kubenswrapper[4895]: I0129 16:33:15.160537 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1362e7b3-cc8c-4a47-a93f-f5e98cce6acd","Type":"ContainerStarted","Data":"5e54b76e570f3b12c08340c277c843f71fb681954998bf84e8789389d341e428"} Jan 29 16:33:15 crc kubenswrapper[4895]: I0129 16:33:15.219897 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.219837654 podStartE2EDuration="2.219837654s" podCreationTimestamp="2026-01-29 16:33:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:33:15.188981275 +0000 UTC m=+1278.991958549" watchObservedRunningTime="2026-01-29 16:33:15.219837654 +0000 UTC m=+1279.022814968" Jan 29 16:33:16 crc kubenswrapper[4895]: E0129 16:33:16.170592 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:33:16 crc kubenswrapper[4895]: E0129 16:33:16.171451 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwvfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b76d8a0a-9395-4b6c-8775-efa0354ace99): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:33:16 crc kubenswrapper[4895]: E0129 16:33:16.172816 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:33:16 crc kubenswrapper[4895]: I0129 16:33:16.741183 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:33:16 crc kubenswrapper[4895]: I0129 
16:33:16.742644 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:33:17 crc kubenswrapper[4895]: I0129 16:33:17.764164 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 16:33:21 crc kubenswrapper[4895]: I0129 16:33:21.741468 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:33:21 crc kubenswrapper[4895]: I0129 16:33:21.742330 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:33:22 crc kubenswrapper[4895]: I0129 16:33:22.762167 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:33:22 crc kubenswrapper[4895]: I0129 16:33:22.763598 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:33:22 crc kubenswrapper[4895]: I0129 16:33:22.764345 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 16:33:22 crc kubenswrapper[4895]: I0129 16:33:22.809143 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 16:33:23 crc kubenswrapper[4895]: I0129 16:33:23.329348 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 16:33:23 crc kubenswrapper[4895]: I0129 16:33:23.925528 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:33:23 crc kubenswrapper[4895]: I0129 16:33:23.926019 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:33:24 crc kubenswrapper[4895]: I0129 16:33:24.942263 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1362e7b3-cc8c-4a47-a93f-f5e98cce6acd" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:33:24 crc kubenswrapper[4895]: I0129 16:33:24.942269 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1362e7b3-cc8c-4a47-a93f-f5e98cce6acd" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.190:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:33:29 crc kubenswrapper[4895]: E0129 16:33:29.042044 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:33:31 crc kubenswrapper[4895]: I0129 16:33:31.751532 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:33:31 crc kubenswrapper[4895]: I0129 16:33:31.752387 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:33:31 crc kubenswrapper[4895]: I0129 16:33:31.759280 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:33:31 crc kubenswrapper[4895]: I0129 16:33:31.770413 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:33:33 crc 
kubenswrapper[4895]: I0129 16:33:33.937061 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:33:33 crc kubenswrapper[4895]: I0129 16:33:33.938121 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:33:33 crc kubenswrapper[4895]: I0129 16:33:33.938204 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:33:33 crc kubenswrapper[4895]: I0129 16:33:33.962213 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:33:34 crc kubenswrapper[4895]: I0129 16:33:34.427446 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:33:34 crc kubenswrapper[4895]: I0129 16:33:34.435492 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:33:41 crc kubenswrapper[4895]: E0129 16:33:41.164823 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:33:41 crc kubenswrapper[4895]: E0129 16:33:41.165745 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwvfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b76d8a0a-9395-4b6c-8775-efa0354ace99): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:33:41 crc kubenswrapper[4895]: E0129 16:33:41.167617 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:33:57 crc kubenswrapper[4895]: E0129 16:33:57.059323 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:34:08 crc kubenswrapper[4895]: E0129 16:34:08.040144 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:34:21 crc kubenswrapper[4895]: E0129 16:34:21.042983 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:34:36 crc kubenswrapper[4895]: E0129 16:34:36.168663 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:34:36 crc kubenswrapper[4895]: E0129 16:34:36.169695 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnl
y:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwvfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b76d8a0a-9395-4b6c-8775-efa0354ace99): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:34:36 crc kubenswrapper[4895]: E0129 16:34:36.171447 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing 
source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:34:51 crc kubenswrapper[4895]: E0129 16:34:51.040832 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:34:58 crc kubenswrapper[4895]: I0129 16:34:58.362045 4895 scope.go:117] "RemoveContainer" containerID="0a5d4da8fbbdd1f39d5fe27edfe64915003d001497146eb73f216861e8b9faa6" Jan 29 16:34:58 crc kubenswrapper[4895]: I0129 16:34:58.413932 4895 scope.go:117] "RemoveContainer" containerID="aa4f271dba09a2a5626c37ee9688bb9605e2bb4d346d2a7baeb8e10905204595" Jan 29 16:35:04 crc kubenswrapper[4895]: E0129 16:35:04.045659 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:35:15 crc kubenswrapper[4895]: E0129 16:35:15.043364 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:35:26 crc kubenswrapper[4895]: E0129 16:35:26.040159 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:35:27 crc kubenswrapper[4895]: 
I0129 16:35:27.823634 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:35:27 crc kubenswrapper[4895]: I0129 16:35:27.823696 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:35:41 crc kubenswrapper[4895]: E0129 16:35:41.040789 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:35:53 crc kubenswrapper[4895]: E0129 16:35:53.040114 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:35:57 crc kubenswrapper[4895]: I0129 16:35:57.824066 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:35:57 crc kubenswrapper[4895]: I0129 16:35:57.825033 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:35:58 crc kubenswrapper[4895]: I0129 16:35:58.505761 4895 scope.go:117] "RemoveContainer" containerID="6f17eb14814b641336430b9c823010992532c8d8ec2a82555d0e7ba8aafc5d1d" Jan 29 16:35:58 crc kubenswrapper[4895]: I0129 16:35:58.567730 4895 scope.go:117] "RemoveContainer" containerID="c78dfff09608dde8797049a8ecea1a112c629c4a8d9e43970b4e3654e7d807ca" Jan 29 16:35:58 crc kubenswrapper[4895]: I0129 16:35:58.604613 4895 scope.go:117] "RemoveContainer" containerID="e315919af079baffbb4fee903cce32f2c976ec3eb34c50a727adf60baa77403e" Jan 29 16:36:07 crc kubenswrapper[4895]: E0129 16:36:07.178108 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 29 16:36:07 crc kubenswrapper[4895]: E0129 16:36:07.179448 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lwvfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b76d8a0a-9395-4b6c-8775-efa0354ace99): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:36:07 crc kubenswrapper[4895]: E0129 16:36:07.180755 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:36:21 crc kubenswrapper[4895]: E0129 16:36:21.043078 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:36:27 crc kubenswrapper[4895]: I0129 16:36:27.823846 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:36:27 crc kubenswrapper[4895]: I0129 16:36:27.824854 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:36:27 crc kubenswrapper[4895]: I0129 16:36:27.824973 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:36:27 crc kubenswrapper[4895]: I0129 16:36:27.826392 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba3dd6b954350bf38e8b9f1effc919dbdd8be56496986ff2037f29d7f2db3c91"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:36:27 crc kubenswrapper[4895]: I0129 16:36:27.826503 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://ba3dd6b954350bf38e8b9f1effc919dbdd8be56496986ff2037f29d7f2db3c91" gracePeriod=600 Jan 29 16:36:28 crc kubenswrapper[4895]: I0129 16:36:28.743408 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="ba3dd6b954350bf38e8b9f1effc919dbdd8be56496986ff2037f29d7f2db3c91" exitCode=0 Jan 29 16:36:28 crc kubenswrapper[4895]: I0129 16:36:28.743491 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"ba3dd6b954350bf38e8b9f1effc919dbdd8be56496986ff2037f29d7f2db3c91"} Jan 29 16:36:28 crc kubenswrapper[4895]: I0129 16:36:28.744064 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230"} Jan 29 16:36:28 crc kubenswrapper[4895]: I0129 16:36:28.744113 4895 scope.go:117] "RemoveContainer" containerID="b61d481b9d79815e2aa0a6766b442621a7f9d5212d6a5963946c3b9463e8ef1c" Jan 29 16:36:32 crc kubenswrapper[4895]: E0129 16:36:32.041080 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:36:47 crc kubenswrapper[4895]: E0129 16:36:47.050198 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:36:58 crc kubenswrapper[4895]: I0129 16:36:58.750210 4895 scope.go:117] "RemoveContainer" containerID="478f57d68d603d5960de088d411dfe81cf91c5385c7de65269aadce03918b405" Jan 29 16:36:58 crc kubenswrapper[4895]: I0129 16:36:58.786390 4895 scope.go:117] "RemoveContainer" containerID="91d8b1587907bdbf1ca3d4f51bdba90d10f544afb556a62df472395c9dc6d69b" Jan 29 16:37:00 crc kubenswrapper[4895]: E0129 16:37:00.039054 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.605572 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-92qm2"] Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.614758 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.642511 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-92qm2"] Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.660352 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a5f19e-38ec-4e07-91f9-37abeda873f5-catalog-content\") pod \"redhat-operators-92qm2\" (UID: \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\") " pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.660466 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a5f19e-38ec-4e07-91f9-37abeda873f5-utilities\") pod \"redhat-operators-92qm2\" (UID: \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\") " pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.660500 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv594\" (UniqueName: \"kubernetes.io/projected/e1a5f19e-38ec-4e07-91f9-37abeda873f5-kube-api-access-xv594\") pod \"redhat-operators-92qm2\" (UID: \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\") " pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.762267 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a5f19e-38ec-4e07-91f9-37abeda873f5-utilities\") pod \"redhat-operators-92qm2\" (UID: \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\") " pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.762358 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv594\" (UniqueName: \"kubernetes.io/projected/e1a5f19e-38ec-4e07-91f9-37abeda873f5-kube-api-access-xv594\") pod \"redhat-operators-92qm2\" (UID: \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\") " pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.762562 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a5f19e-38ec-4e07-91f9-37abeda873f5-catalog-content\") pod \"redhat-operators-92qm2\" (UID: \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\") " pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.763212 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a5f19e-38ec-4e07-91f9-37abeda873f5-utilities\") pod \"redhat-operators-92qm2\" (UID: \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\") " pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.763227 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a5f19e-38ec-4e07-91f9-37abeda873f5-catalog-content\") pod \"redhat-operators-92qm2\" (UID: \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\") " pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.793472 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv594\" 
(UniqueName: \"kubernetes.io/projected/e1a5f19e-38ec-4e07-91f9-37abeda873f5-kube-api-access-xv594\") pod \"redhat-operators-92qm2\" (UID: \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\") " pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:37:01 crc kubenswrapper[4895]: I0129 16:37:01.956282 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:37:02 crc kubenswrapper[4895]: I0129 16:37:02.450947 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-92qm2"] Jan 29 16:37:03 crc kubenswrapper[4895]: I0129 16:37:03.157392 4895 generic.go:334] "Generic (PLEG): container finished" podID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" containerID="62152aa747c01047703c5948c94613e4939314c52b73363f5ad3b127abb8691d" exitCode=0 Jan 29 16:37:03 crc kubenswrapper[4895]: I0129 16:37:03.157456 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-92qm2" event={"ID":"e1a5f19e-38ec-4e07-91f9-37abeda873f5","Type":"ContainerDied","Data":"62152aa747c01047703c5948c94613e4939314c52b73363f5ad3b127abb8691d"} Jan 29 16:37:03 crc kubenswrapper[4895]: I0129 16:37:03.158039 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-92qm2" event={"ID":"e1a5f19e-38ec-4e07-91f9-37abeda873f5","Type":"ContainerStarted","Data":"53fa52ac36ebab240d35a614ec5ea5a5bd1033ed9a57708b8b207dbfe9489db1"} Jan 29 16:37:03 crc kubenswrapper[4895]: I0129 16:37:03.161416 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:37:03 crc kubenswrapper[4895]: E0129 16:37:03.294177 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:37:03 crc kubenswrapper[4895]: E0129 16:37:03.294393 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv594,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-92qm2_openshift-marketplace(e1a5f19e-38ec-4e07-91f9-37abeda873f5): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 
(Forbidden)" logger="UnhandledError" Jan 29 16:37:03 crc kubenswrapper[4895]: E0129 16:37:03.295971 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-92qm2" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" Jan 29 16:37:04 crc kubenswrapper[4895]: E0129 16:37:04.169597 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-92qm2" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" Jan 29 16:37:14 crc kubenswrapper[4895]: E0129 16:37:14.040753 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:37:18 crc kubenswrapper[4895]: I0129 16:37:18.850138 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lcblh"] Jan 29 16:37:18 crc kubenswrapper[4895]: I0129 16:37:18.856032 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:37:18 crc kubenswrapper[4895]: I0129 16:37:18.865920 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lcblh"] Jan 29 16:37:19 crc kubenswrapper[4895]: I0129 16:37:19.052803 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx9pm\" (UniqueName: \"kubernetes.io/projected/d68925c8-d035-4236-a95f-4616aa90d818-kube-api-access-bx9pm\") pod \"certified-operators-lcblh\" (UID: \"d68925c8-d035-4236-a95f-4616aa90d818\") " pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:37:19 crc kubenswrapper[4895]: I0129 16:37:19.053378 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68925c8-d035-4236-a95f-4616aa90d818-utilities\") pod \"certified-operators-lcblh\" (UID: \"d68925c8-d035-4236-a95f-4616aa90d818\") " pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:37:19 crc kubenswrapper[4895]: I0129 16:37:19.053486 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68925c8-d035-4236-a95f-4616aa90d818-catalog-content\") pod \"certified-operators-lcblh\" (UID: \"d68925c8-d035-4236-a95f-4616aa90d818\") " pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:37:19 crc kubenswrapper[4895]: I0129 16:37:19.156050 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx9pm\" (UniqueName: \"kubernetes.io/projected/d68925c8-d035-4236-a95f-4616aa90d818-kube-api-access-bx9pm\") pod \"certified-operators-lcblh\" (UID: \"d68925c8-d035-4236-a95f-4616aa90d818\") " pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:37:19 crc kubenswrapper[4895]: I0129 16:37:19.157201 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68925c8-d035-4236-a95f-4616aa90d818-utilities\") pod \"certified-operators-lcblh\" (UID: \"d68925c8-d035-4236-a95f-4616aa90d818\") " pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:37:19 crc kubenswrapper[4895]: I0129 16:37:19.157564 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68925c8-d035-4236-a95f-4616aa90d818-catalog-content\") pod \"certified-operators-lcblh\" (UID: \"d68925c8-d035-4236-a95f-4616aa90d818\") " pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:37:19 crc kubenswrapper[4895]: I0129 16:37:19.158046 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68925c8-d035-4236-a95f-4616aa90d818-utilities\") pod \"certified-operators-lcblh\" (UID: \"d68925c8-d035-4236-a95f-4616aa90d818\") " pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:37:19 crc kubenswrapper[4895]: I0129 16:37:19.158191 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68925c8-d035-4236-a95f-4616aa90d818-catalog-content\") pod \"certified-operators-lcblh\" (UID: \"d68925c8-d035-4236-a95f-4616aa90d818\") " pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:37:19 crc kubenswrapper[4895]: E0129 16:37:19.168451 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:37:19 crc kubenswrapper[4895]: E0129 16:37:19.168661 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv594,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-92qm2_openshift-marketplace(e1a5f19e-38ec-4e07-91f9-37abeda873f5): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:37:19 crc kubenswrapper[4895]: E0129 16:37:19.170100 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-92qm2" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" Jan 29 16:37:19 crc kubenswrapper[4895]: I0129 16:37:19.190426 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx9pm\" (UniqueName: \"kubernetes.io/projected/d68925c8-d035-4236-a95f-4616aa90d818-kube-api-access-bx9pm\") pod \"certified-operators-lcblh\" (UID: \"d68925c8-d035-4236-a95f-4616aa90d818\") " pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:37:19 crc kubenswrapper[4895]: I0129 16:37:19.482837 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:37:20 crc kubenswrapper[4895]: I0129 16:37:20.046152 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lcblh"] Jan 29 16:37:20 crc kubenswrapper[4895]: I0129 16:37:20.332965 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcblh" event={"ID":"d68925c8-d035-4236-a95f-4616aa90d818","Type":"ContainerStarted","Data":"aabee41977796bc86a89e98046e89fede2a943fce3ee59cbd9068ac61b63f547"} Jan 29 16:37:20 crc kubenswrapper[4895]: I0129 16:37:20.333310 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcblh" event={"ID":"d68925c8-d035-4236-a95f-4616aa90d818","Type":"ContainerStarted","Data":"8a5c1be590d699696c80fa253a651ae658f8d7e770cd85cfecd796ada65f39b1"} Jan 29 16:37:21 crc kubenswrapper[4895]: I0129 16:37:21.345427 4895 generic.go:334] "Generic (PLEG): container finished" podID="d68925c8-d035-4236-a95f-4616aa90d818" containerID="aabee41977796bc86a89e98046e89fede2a943fce3ee59cbd9068ac61b63f547" exitCode=0 Jan 29 16:37:21 crc 
kubenswrapper[4895]: I0129 16:37:21.345508 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcblh" event={"ID":"d68925c8-d035-4236-a95f-4616aa90d818","Type":"ContainerDied","Data":"aabee41977796bc86a89e98046e89fede2a943fce3ee59cbd9068ac61b63f547"} Jan 29 16:37:21 crc kubenswrapper[4895]: E0129 16:37:21.486719 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:37:21 crc kubenswrapper[4895]: E0129 16:37:21.487422 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bx9pm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lcblh_openshift-marketplace(d68925c8-d035-4236-a95f-4616aa90d818): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:37:21 crc kubenswrapper[4895]: E0129 16:37:21.489746 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/certified-operators-lcblh" podUID="d68925c8-d035-4236-a95f-4616aa90d818" Jan 29 16:37:22 crc kubenswrapper[4895]: E0129 16:37:22.361747 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lcblh" podUID="d68925c8-d035-4236-a95f-4616aa90d818" Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.251964 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tlppq"] Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.254345 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlppq" Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.287572 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlppq"] Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.363488 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6sl7\" (UniqueName: \"kubernetes.io/projected/d4daf356-0b44-4180-8e33-7b4048813006-kube-api-access-l6sl7\") pod \"community-operators-tlppq\" (UID: \"d4daf356-0b44-4180-8e33-7b4048813006\") " pod="openshift-marketplace/community-operators-tlppq" Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.363562 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4daf356-0b44-4180-8e33-7b4048813006-catalog-content\") pod \"community-operators-tlppq\" (UID: \"d4daf356-0b44-4180-8e33-7b4048813006\") " pod="openshift-marketplace/community-operators-tlppq" Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.363719 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4daf356-0b44-4180-8e33-7b4048813006-utilities\") pod \"community-operators-tlppq\" (UID: \"d4daf356-0b44-4180-8e33-7b4048813006\") " pod="openshift-marketplace/community-operators-tlppq" Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.465843 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4daf356-0b44-4180-8e33-7b4048813006-utilities\") pod \"community-operators-tlppq\" (UID: \"d4daf356-0b44-4180-8e33-7b4048813006\") " pod="openshift-marketplace/community-operators-tlppq" Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.465968 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6sl7\" (UniqueName: \"kubernetes.io/projected/d4daf356-0b44-4180-8e33-7b4048813006-kube-api-access-l6sl7\") pod \"community-operators-tlppq\" (UID: \"d4daf356-0b44-4180-8e33-7b4048813006\") " pod="openshift-marketplace/community-operators-tlppq" Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.465992 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4daf356-0b44-4180-8e33-7b4048813006-catalog-content\") pod \"community-operators-tlppq\" (UID: \"d4daf356-0b44-4180-8e33-7b4048813006\") " pod="openshift-marketplace/community-operators-tlppq" Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.466405 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4daf356-0b44-4180-8e33-7b4048813006-utilities\") pod \"community-operators-tlppq\" (UID: \"d4daf356-0b44-4180-8e33-7b4048813006\") " pod="openshift-marketplace/community-operators-tlppq" Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.466456 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4daf356-0b44-4180-8e33-7b4048813006-catalog-content\") pod \"community-operators-tlppq\" (UID: \"d4daf356-0b44-4180-8e33-7b4048813006\") " pod="openshift-marketplace/community-operators-tlppq" Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.505705 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6sl7\" (UniqueName: \"kubernetes.io/projected/d4daf356-0b44-4180-8e33-7b4048813006-kube-api-access-l6sl7\") pod \"community-operators-tlppq\" (UID: \"d4daf356-0b44-4180-8e33-7b4048813006\") " pod="openshift-marketplace/community-operators-tlppq" Jan 29 16:37:23 crc kubenswrapper[4895]: I0129 16:37:23.587492 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlppq" Jan 29 16:37:24 crc kubenswrapper[4895]: I0129 16:37:24.185510 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tlppq"] Jan 29 16:37:24 crc kubenswrapper[4895]: W0129 16:37:24.197961 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4daf356_0b44_4180_8e33_7b4048813006.slice/crio-08032110b4d110c5c1d9ded5f61ffff32b06f07af433d01d1cdcf5944c7aaac1 WatchSource:0}: Error finding container 08032110b4d110c5c1d9ded5f61ffff32b06f07af433d01d1cdcf5944c7aaac1: Status 404 returned error can't find the container with id 08032110b4d110c5c1d9ded5f61ffff32b06f07af433d01d1cdcf5944c7aaac1 Jan 29 16:37:24 crc kubenswrapper[4895]: I0129 16:37:24.386320 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlppq" event={"ID":"d4daf356-0b44-4180-8e33-7b4048813006","Type":"ContainerStarted","Data":"08032110b4d110c5c1d9ded5f61ffff32b06f07af433d01d1cdcf5944c7aaac1"} Jan 29 16:37:25 crc kubenswrapper[4895]: E0129 16:37:25.063822 4895 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:37:25 crc kubenswrapper[4895]: I0129 16:37:25.415505 4895 generic.go:334] "Generic (PLEG): container finished" podID="d4daf356-0b44-4180-8e33-7b4048813006" containerID="83740032842f719ccdd0a03fa3c51b043666ad966818548ef8d21b9d92c5b4bd" exitCode=0 Jan 29 16:37:25 crc kubenswrapper[4895]: I0129 16:37:25.415573 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlppq" event={"ID":"d4daf356-0b44-4180-8e33-7b4048813006","Type":"ContainerDied","Data":"83740032842f719ccdd0a03fa3c51b043666ad966818548ef8d21b9d92c5b4bd"} Jan 29 16:37:25 crc kubenswrapper[4895]: E0129 16:37:25.542885 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:37:25 crc kubenswrapper[4895]: E0129 16:37:25.543097 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6sl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tlppq_openshift-marketplace(d4daf356-0b44-4180-8e33-7b4048813006): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:37:25 crc kubenswrapper[4895]: E0129 16:37:25.544334 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-tlppq" podUID="d4daf356-0b44-4180-8e33-7b4048813006" Jan 29 16:37:26 crc kubenswrapper[4895]: E0129 16:37:26.429718 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tlppq" podUID="d4daf356-0b44-4180-8e33-7b4048813006" Jan 29 16:37:30 crc kubenswrapper[4895]: E0129 16:37:30.039430 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-92qm2" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" Jan 29 16:37:36 crc kubenswrapper[4895]: E0129 16:37:36.180228 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:37:36 crc kubenswrapper[4895]: E0129 16:37:36.181281 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bx9pm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lcblh_openshift-marketplace(d68925c8-d035-4236-a95f-4616aa90d818): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:37:36 crc kubenswrapper[4895]: E0129 16:37:36.182581 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/certified-operators-lcblh" podUID="d68925c8-d035-4236-a95f-4616aa90d818" Jan 29 16:37:37 crc kubenswrapper[4895]: E0129 16:37:37.054836 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:37:40 crc kubenswrapper[4895]: E0129 16:37:40.160357 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:37:40 crc kubenswrapper[4895]: E0129 16:37:40.161088 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6sl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tlppq_openshift-marketplace(d4daf356-0b44-4180-8e33-7b4048813006): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:37:40 crc kubenswrapper[4895]: E0129 16:37:40.162327 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-tlppq" podUID="d4daf356-0b44-4180-8e33-7b4048813006" Jan 29 16:37:43 crc kubenswrapper[4895]: E0129 16:37:43.166251 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:37:43 crc kubenswrapper[4895]: E0129 16:37:43.166943 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv594,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationM
essagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-92qm2_openshift-marketplace(e1a5f19e-38ec-4e07-91f9-37abeda873f5): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:37:43 crc kubenswrapper[4895]: E0129 16:37:43.168141 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-92qm2" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" Jan 29 16:37:48 crc kubenswrapper[4895]: E0129 16:37:48.042683 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lcblh" podUID="d68925c8-d035-4236-a95f-4616aa90d818" Jan 29 16:37:49 crc kubenswrapper[4895]: E0129 16:37:49.038552 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:37:52 crc kubenswrapper[4895]: E0129 16:37:52.041221 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tlppq" 
podUID="d4daf356-0b44-4180-8e33-7b4048813006" Jan 29 16:37:57 crc kubenswrapper[4895]: E0129 16:37:57.053324 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-92qm2" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" Jan 29 16:37:58 crc kubenswrapper[4895]: I0129 16:37:58.864139 4895 scope.go:117] "RemoveContainer" containerID="459fcad5ecf9ce397403ef7183604d522b848035382206c6922cc2c7fb019031" Jan 29 16:38:02 crc kubenswrapper[4895]: I0129 16:38:02.882708 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcblh" event={"ID":"d68925c8-d035-4236-a95f-4616aa90d818","Type":"ContainerStarted","Data":"d52cf2e4c2cd267e82911af5f9c49122d04e9c7b0f79a511347230f537aab797"} Jan 29 16:38:03 crc kubenswrapper[4895]: E0129 16:38:03.039567 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:38:03 crc kubenswrapper[4895]: I0129 16:38:03.898764 4895 generic.go:334] "Generic (PLEG): container finished" podID="d68925c8-d035-4236-a95f-4616aa90d818" containerID="d52cf2e4c2cd267e82911af5f9c49122d04e9c7b0f79a511347230f537aab797" exitCode=0 Jan 29 16:38:03 crc kubenswrapper[4895]: I0129 16:38:03.898829 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcblh" event={"ID":"d68925c8-d035-4236-a95f-4616aa90d818","Type":"ContainerDied","Data":"d52cf2e4c2cd267e82911af5f9c49122d04e9c7b0f79a511347230f537aab797"} Jan 29 16:38:05 crc kubenswrapper[4895]: I0129 16:38:05.934644 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-lcblh" event={"ID":"d68925c8-d035-4236-a95f-4616aa90d818","Type":"ContainerStarted","Data":"7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532"} Jan 29 16:38:05 crc kubenswrapper[4895]: I0129 16:38:05.967479 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lcblh" podStartSLOduration=4.539771094 podStartE2EDuration="47.967449917s" podCreationTimestamp="2026-01-29 16:37:18 +0000 UTC" firstStartedPulling="2026-01-29 16:37:21.34830307 +0000 UTC m=+1525.151280344" lastFinishedPulling="2026-01-29 16:38:04.775981903 +0000 UTC m=+1568.578959167" observedRunningTime="2026-01-29 16:38:05.959519833 +0000 UTC m=+1569.762497117" watchObservedRunningTime="2026-01-29 16:38:05.967449917 +0000 UTC m=+1569.770427181" Jan 29 16:38:09 crc kubenswrapper[4895]: E0129 16:38:09.040610 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-92qm2" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" Jan 29 16:38:09 crc kubenswrapper[4895]: I0129 16:38:09.483463 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:38:09 crc kubenswrapper[4895]: I0129 16:38:09.483824 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:38:09 crc kubenswrapper[4895]: I0129 16:38:09.544274 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lcblh" Jan 29 16:38:11 crc kubenswrapper[4895]: I0129 16:38:11.032882 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lcblh" Jan 29 
16:38:11 crc kubenswrapper[4895]: I0129 16:38:11.105292 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lcblh"] Jan 29 16:38:11 crc kubenswrapper[4895]: I0129 16:38:11.994216 4895 generic.go:334] "Generic (PLEG): container finished" podID="d4daf356-0b44-4180-8e33-7b4048813006" containerID="4df2e9ea016980ac208264d8771a96e6872c6f25e3ca50be953e602b0b410c46" exitCode=0 Jan 29 16:38:11 crc kubenswrapper[4895]: I0129 16:38:11.994365 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlppq" event={"ID":"d4daf356-0b44-4180-8e33-7b4048813006","Type":"ContainerDied","Data":"4df2e9ea016980ac208264d8771a96e6872c6f25e3ca50be953e602b0b410c46"} Jan 29 16:38:13 crc kubenswrapper[4895]: I0129 16:38:13.004901 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lcblh" podUID="d68925c8-d035-4236-a95f-4616aa90d818" containerName="registry-server" containerID="cri-o://7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532" gracePeriod=2 Jan 29 16:38:14 crc kubenswrapper[4895]: I0129 16:38:14.022010 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlppq" event={"ID":"d4daf356-0b44-4180-8e33-7b4048813006","Type":"ContainerStarted","Data":"7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47"} Jan 29 16:38:14 crc kubenswrapper[4895]: I0129 16:38:14.774033 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lcblh"
Jan 29 16:38:14 crc kubenswrapper[4895]: I0129 16:38:14.894488 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68925c8-d035-4236-a95f-4616aa90d818-utilities\") pod \"d68925c8-d035-4236-a95f-4616aa90d818\" (UID: \"d68925c8-d035-4236-a95f-4616aa90d818\") "
Jan 29 16:38:14 crc kubenswrapper[4895]: I0129 16:38:14.895139 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx9pm\" (UniqueName: \"kubernetes.io/projected/d68925c8-d035-4236-a95f-4616aa90d818-kube-api-access-bx9pm\") pod \"d68925c8-d035-4236-a95f-4616aa90d818\" (UID: \"d68925c8-d035-4236-a95f-4616aa90d818\") "
Jan 29 16:38:14 crc kubenswrapper[4895]: I0129 16:38:14.895255 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68925c8-d035-4236-a95f-4616aa90d818-catalog-content\") pod \"d68925c8-d035-4236-a95f-4616aa90d818\" (UID: \"d68925c8-d035-4236-a95f-4616aa90d818\") "
Jan 29 16:38:14 crc kubenswrapper[4895]: I0129 16:38:14.895996 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68925c8-d035-4236-a95f-4616aa90d818-utilities" (OuterVolumeSpecName: "utilities") pod "d68925c8-d035-4236-a95f-4616aa90d818" (UID: "d68925c8-d035-4236-a95f-4616aa90d818"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:38:14 crc kubenswrapper[4895]: I0129 16:38:14.906149 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d68925c8-d035-4236-a95f-4616aa90d818-kube-api-access-bx9pm" (OuterVolumeSpecName: "kube-api-access-bx9pm") pod "d68925c8-d035-4236-a95f-4616aa90d818" (UID: "d68925c8-d035-4236-a95f-4616aa90d818"). InnerVolumeSpecName "kube-api-access-bx9pm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:38:14 crc kubenswrapper[4895]: I0129 16:38:14.946716 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68925c8-d035-4236-a95f-4616aa90d818-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d68925c8-d035-4236-a95f-4616aa90d818" (UID: "d68925c8-d035-4236-a95f-4616aa90d818"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:38:14 crc kubenswrapper[4895]: I0129 16:38:14.997558 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68925c8-d035-4236-a95f-4616aa90d818-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 16:38:14 crc kubenswrapper[4895]: I0129 16:38:14.997593 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68925c8-d035-4236-a95f-4616aa90d818-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 16:38:14 crc kubenswrapper[4895]: I0129 16:38:14.997603 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx9pm\" (UniqueName: \"kubernetes.io/projected/d68925c8-d035-4236-a95f-4616aa90d818-kube-api-access-bx9pm\") on node \"crc\" DevicePath \"\""
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.034067 4895 generic.go:334] "Generic (PLEG): container finished" podID="d68925c8-d035-4236-a95f-4616aa90d818" containerID="7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532" exitCode=0
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.034134 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lcblh"
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.034197 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcblh" event={"ID":"d68925c8-d035-4236-a95f-4616aa90d818","Type":"ContainerDied","Data":"7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532"}
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.034257 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcblh" event={"ID":"d68925c8-d035-4236-a95f-4616aa90d818","Type":"ContainerDied","Data":"8a5c1be590d699696c80fa253a651ae658f8d7e770cd85cfecd796ada65f39b1"}
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.034283 4895 scope.go:117] "RemoveContainer" containerID="7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532"
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.064225 4895 scope.go:117] "RemoveContainer" containerID="d52cf2e4c2cd267e82911af5f9c49122d04e9c7b0f79a511347230f537aab797"
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.071768 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tlppq" podStartSLOduration=3.737771805 podStartE2EDuration="52.071741068s" podCreationTimestamp="2026-01-29 16:37:23 +0000 UTC" firstStartedPulling="2026-01-29 16:37:25.419048473 +0000 UTC m=+1529.222025737" lastFinishedPulling="2026-01-29 16:38:13.753017736 +0000 UTC m=+1577.555995000" observedRunningTime="2026-01-29 16:38:15.0600221 +0000 UTC m=+1578.862999384" watchObservedRunningTime="2026-01-29 16:38:15.071741068 +0000 UTC m=+1578.874718332"
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.104013 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lcblh"]
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.110814 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lcblh"]
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.114719 4895 scope.go:117] "RemoveContainer" containerID="aabee41977796bc86a89e98046e89fede2a943fce3ee59cbd9068ac61b63f547"
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.142997 4895 scope.go:117] "RemoveContainer" containerID="7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532"
Jan 29 16:38:15 crc kubenswrapper[4895]: E0129 16:38:15.143816 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532\": container with ID starting with 7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532 not found: ID does not exist" containerID="7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532"
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.143892 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532"} err="failed to get container status \"7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532\": rpc error: code = NotFound desc = could not find container \"7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532\": container with ID starting with 7b84e17f7c752c75127fb01e7060821807a7325dd7381962e5a82995e7d3e532 not found: ID does not exist"
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.143937 4895 scope.go:117] "RemoveContainer" containerID="d52cf2e4c2cd267e82911af5f9c49122d04e9c7b0f79a511347230f537aab797"
Jan 29 16:38:15 crc kubenswrapper[4895]: E0129 16:38:15.144280 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d52cf2e4c2cd267e82911af5f9c49122d04e9c7b0f79a511347230f537aab797\": container with ID starting with d52cf2e4c2cd267e82911af5f9c49122d04e9c7b0f79a511347230f537aab797 not found: ID does not exist" containerID="d52cf2e4c2cd267e82911af5f9c49122d04e9c7b0f79a511347230f537aab797"
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.144307 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d52cf2e4c2cd267e82911af5f9c49122d04e9c7b0f79a511347230f537aab797"} err="failed to get container status \"d52cf2e4c2cd267e82911af5f9c49122d04e9c7b0f79a511347230f537aab797\": rpc error: code = NotFound desc = could not find container \"d52cf2e4c2cd267e82911af5f9c49122d04e9c7b0f79a511347230f537aab797\": container with ID starting with d52cf2e4c2cd267e82911af5f9c49122d04e9c7b0f79a511347230f537aab797 not found: ID does not exist"
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.144321 4895 scope.go:117] "RemoveContainer" containerID="aabee41977796bc86a89e98046e89fede2a943fce3ee59cbd9068ac61b63f547"
Jan 29 16:38:15 crc kubenswrapper[4895]: E0129 16:38:15.144538 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aabee41977796bc86a89e98046e89fede2a943fce3ee59cbd9068ac61b63f547\": container with ID starting with aabee41977796bc86a89e98046e89fede2a943fce3ee59cbd9068ac61b63f547 not found: ID does not exist" containerID="aabee41977796bc86a89e98046e89fede2a943fce3ee59cbd9068ac61b63f547"
Jan 29 16:38:15 crc kubenswrapper[4895]: I0129 16:38:15.144558 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aabee41977796bc86a89e98046e89fede2a943fce3ee59cbd9068ac61b63f547"} err="failed to get container status \"aabee41977796bc86a89e98046e89fede2a943fce3ee59cbd9068ac61b63f547\": rpc error: code = NotFound desc = could not find container \"aabee41977796bc86a89e98046e89fede2a943fce3ee59cbd9068ac61b63f547\": container with ID starting with aabee41977796bc86a89e98046e89fede2a943fce3ee59cbd9068ac61b63f547 not found: ID does not exist"
Jan 29 16:38:17 crc kubenswrapper[4895]: I0129 16:38:17.051512 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d68925c8-d035-4236-a95f-4616aa90d818" path="/var/lib/kubelet/pods/d68925c8-d035-4236-a95f-4616aa90d818/volumes"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.034709 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2f2ds"]
Jan 29 16:38:18 crc kubenswrapper[4895]: E0129 16:38:18.035701 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68925c8-d035-4236-a95f-4616aa90d818" containerName="extract-content"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.035736 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68925c8-d035-4236-a95f-4616aa90d818" containerName="extract-content"
Jan 29 16:38:18 crc kubenswrapper[4895]: E0129 16:38:18.035758 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68925c8-d035-4236-a95f-4616aa90d818" containerName="registry-server"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.035769 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68925c8-d035-4236-a95f-4616aa90d818" containerName="registry-server"
Jan 29 16:38:18 crc kubenswrapper[4895]: E0129 16:38:18.035793 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68925c8-d035-4236-a95f-4616aa90d818" containerName="extract-utilities"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.035804 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68925c8-d035-4236-a95f-4616aa90d818" containerName="extract-utilities"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.036336 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d68925c8-d035-4236-a95f-4616aa90d818" containerName="registry-server"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.039809 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:18 crc kubenswrapper[4895]: E0129 16:38:18.044709 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.045916 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f2ds"]
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.165809 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj8pz\" (UniqueName: \"kubernetes.io/projected/2347a0ef-6f44-4f0b-a24d-639b38d71e12-kube-api-access-jj8pz\") pod \"redhat-marketplace-2f2ds\" (UID: \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\") " pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.165944 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2347a0ef-6f44-4f0b-a24d-639b38d71e12-utilities\") pod \"redhat-marketplace-2f2ds\" (UID: \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\") " pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.166083 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2347a0ef-6f44-4f0b-a24d-639b38d71e12-catalog-content\") pod \"redhat-marketplace-2f2ds\" (UID: \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\") " pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.268074 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj8pz\" (UniqueName: \"kubernetes.io/projected/2347a0ef-6f44-4f0b-a24d-639b38d71e12-kube-api-access-jj8pz\") pod \"redhat-marketplace-2f2ds\" (UID: \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\") " pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.268135 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2347a0ef-6f44-4f0b-a24d-639b38d71e12-utilities\") pod \"redhat-marketplace-2f2ds\" (UID: \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\") " pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.268183 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2347a0ef-6f44-4f0b-a24d-639b38d71e12-catalog-content\") pod \"redhat-marketplace-2f2ds\" (UID: \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\") " pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.268755 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2347a0ef-6f44-4f0b-a24d-639b38d71e12-utilities\") pod \"redhat-marketplace-2f2ds\" (UID: \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\") " pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.268844 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2347a0ef-6f44-4f0b-a24d-639b38d71e12-catalog-content\") pod \"redhat-marketplace-2f2ds\" (UID: \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\") " pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.291168 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj8pz\" (UniqueName: \"kubernetes.io/projected/2347a0ef-6f44-4f0b-a24d-639b38d71e12-kube-api-access-jj8pz\") pod \"redhat-marketplace-2f2ds\" (UID: \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\") " pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.367161 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:18 crc kubenswrapper[4895]: I0129 16:38:18.852158 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f2ds"]
Jan 29 16:38:18 crc kubenswrapper[4895]: W0129 16:38:18.853933 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2347a0ef_6f44_4f0b_a24d_639b38d71e12.slice/crio-31bc4da9359b4609a3c52a4b70ada33057d77483bbf53b049b55d067ff9d5e44 WatchSource:0}: Error finding container 31bc4da9359b4609a3c52a4b70ada33057d77483bbf53b049b55d067ff9d5e44: Status 404 returned error can't find the container with id 31bc4da9359b4609a3c52a4b70ada33057d77483bbf53b049b55d067ff9d5e44
Jan 29 16:38:19 crc kubenswrapper[4895]: I0129 16:38:19.084694 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f2ds" event={"ID":"2347a0ef-6f44-4f0b-a24d-639b38d71e12","Type":"ContainerStarted","Data":"31bc4da9359b4609a3c52a4b70ada33057d77483bbf53b049b55d067ff9d5e44"}
Jan 29 16:38:20 crc kubenswrapper[4895]: I0129 16:38:20.117629 4895 generic.go:334] "Generic (PLEG): container finished" podID="2347a0ef-6f44-4f0b-a24d-639b38d71e12" containerID="355c20c9ecc2dded0877c0a4fdda4af369e5276ee348a380cfe41bfa85f4623b" exitCode=0
Jan 29 16:38:20 crc kubenswrapper[4895]: I0129 16:38:20.117705 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f2ds" event={"ID":"2347a0ef-6f44-4f0b-a24d-639b38d71e12","Type":"ContainerDied","Data":"355c20c9ecc2dded0877c0a4fdda4af369e5276ee348a380cfe41bfa85f4623b"}
Jan 29 16:38:21 crc kubenswrapper[4895]: E0129 16:38:21.109046 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-92qm2" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5"
Jan 29 16:38:22 crc kubenswrapper[4895]: I0129 16:38:22.140028 4895 generic.go:334] "Generic (PLEG): container finished" podID="2347a0ef-6f44-4f0b-a24d-639b38d71e12" containerID="0e65a27f682ec82730e65663789f14f32f2e4216b7b4d6f44841117d40b8fbcd" exitCode=0
Jan 29 16:38:22 crc kubenswrapper[4895]: I0129 16:38:22.140126 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f2ds" event={"ID":"2347a0ef-6f44-4f0b-a24d-639b38d71e12","Type":"ContainerDied","Data":"0e65a27f682ec82730e65663789f14f32f2e4216b7b4d6f44841117d40b8fbcd"}
Jan 29 16:38:23 crc kubenswrapper[4895]: I0129 16:38:23.152390 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f2ds" event={"ID":"2347a0ef-6f44-4f0b-a24d-639b38d71e12","Type":"ContainerStarted","Data":"c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a"}
Jan 29 16:38:23 crc kubenswrapper[4895]: I0129 16:38:23.178520 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2f2ds" podStartSLOduration=2.609173538 podStartE2EDuration="5.178490746s" podCreationTimestamp="2026-01-29 16:38:18 +0000 UTC" firstStartedPulling="2026-01-29 16:38:20.121050936 +0000 UTC m=+1583.924028200" lastFinishedPulling="2026-01-29 16:38:22.690368144 +0000 UTC m=+1586.493345408" observedRunningTime="2026-01-29 16:38:23.174478508 +0000 UTC m=+1586.977455772" watchObservedRunningTime="2026-01-29 16:38:23.178490746 +0000 UTC m=+1586.981468020"
Jan 29 16:38:23 crc kubenswrapper[4895]: I0129 16:38:23.587763 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tlppq"
Jan 29 16:38:23 crc kubenswrapper[4895]: I0129 16:38:23.587833 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tlppq"
Jan 29 16:38:23 crc kubenswrapper[4895]: I0129 16:38:23.655481 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tlppq"
Jan 29 16:38:24 crc kubenswrapper[4895]: I0129 16:38:24.248062 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tlppq"
Jan 29 16:38:25 crc kubenswrapper[4895]: I0129 16:38:25.217481 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tlppq"]
Jan 29 16:38:26 crc kubenswrapper[4895]: I0129 16:38:26.202135 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tlppq" podUID="d4daf356-0b44-4180-8e33-7b4048813006" containerName="registry-server" containerID="cri-o://7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47" gracePeriod=2
Jan 29 16:38:26 crc kubenswrapper[4895]: E0129 16:38:26.453853 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4daf356_0b44_4180_8e33_7b4048813006.slice/crio-conmon-7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4daf356_0b44_4180_8e33_7b4048813006.slice/crio-7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47.scope\": RecentStats: unable to find data in memory cache]"
Jan 29 16:38:26 crc kubenswrapper[4895]: I0129 16:38:26.711121 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlppq"
Jan 29 16:38:26 crc kubenswrapper[4895]: I0129 16:38:26.768858 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4daf356-0b44-4180-8e33-7b4048813006-catalog-content\") pod \"d4daf356-0b44-4180-8e33-7b4048813006\" (UID: \"d4daf356-0b44-4180-8e33-7b4048813006\") "
Jan 29 16:38:26 crc kubenswrapper[4895]: I0129 16:38:26.769157 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4daf356-0b44-4180-8e33-7b4048813006-utilities\") pod \"d4daf356-0b44-4180-8e33-7b4048813006\" (UID: \"d4daf356-0b44-4180-8e33-7b4048813006\") "
Jan 29 16:38:26 crc kubenswrapper[4895]: I0129 16:38:26.769324 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6sl7\" (UniqueName: \"kubernetes.io/projected/d4daf356-0b44-4180-8e33-7b4048813006-kube-api-access-l6sl7\") pod \"d4daf356-0b44-4180-8e33-7b4048813006\" (UID: \"d4daf356-0b44-4180-8e33-7b4048813006\") "
Jan 29 16:38:26 crc kubenswrapper[4895]: I0129 16:38:26.771159 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4daf356-0b44-4180-8e33-7b4048813006-utilities" (OuterVolumeSpecName: "utilities") pod "d4daf356-0b44-4180-8e33-7b4048813006" (UID: "d4daf356-0b44-4180-8e33-7b4048813006"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:38:26 crc kubenswrapper[4895]: I0129 16:38:26.779401 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4daf356-0b44-4180-8e33-7b4048813006-kube-api-access-l6sl7" (OuterVolumeSpecName: "kube-api-access-l6sl7") pod "d4daf356-0b44-4180-8e33-7b4048813006" (UID: "d4daf356-0b44-4180-8e33-7b4048813006"). InnerVolumeSpecName "kube-api-access-l6sl7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:38:26 crc kubenswrapper[4895]: I0129 16:38:26.834980 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4daf356-0b44-4180-8e33-7b4048813006-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4daf356-0b44-4180-8e33-7b4048813006" (UID: "d4daf356-0b44-4180-8e33-7b4048813006"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:38:26 crc kubenswrapper[4895]: I0129 16:38:26.872076 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4daf356-0b44-4180-8e33-7b4048813006-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 16:38:26 crc kubenswrapper[4895]: I0129 16:38:26.872132 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6sl7\" (UniqueName: \"kubernetes.io/projected/d4daf356-0b44-4180-8e33-7b4048813006-kube-api-access-l6sl7\") on node \"crc\" DevicePath \"\""
Jan 29 16:38:26 crc kubenswrapper[4895]: I0129 16:38:26.872145 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4daf356-0b44-4180-8e33-7b4048813006-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.225126 4895 generic.go:334] "Generic (PLEG): container finished" podID="d4daf356-0b44-4180-8e33-7b4048813006" containerID="7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47" exitCode=0
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.225277 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tlppq"
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.225297 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlppq" event={"ID":"d4daf356-0b44-4180-8e33-7b4048813006","Type":"ContainerDied","Data":"7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47"}
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.226268 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tlppq" event={"ID":"d4daf356-0b44-4180-8e33-7b4048813006","Type":"ContainerDied","Data":"08032110b4d110c5c1d9ded5f61ffff32b06f07af433d01d1cdcf5944c7aaac1"}
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.226305 4895 scope.go:117] "RemoveContainer" containerID="7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47"
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.270945 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tlppq"]
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.273029 4895 scope.go:117] "RemoveContainer" containerID="4df2e9ea016980ac208264d8771a96e6872c6f25e3ca50be953e602b0b410c46"
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.323852 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tlppq"]
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.331200 4895 scope.go:117] "RemoveContainer" containerID="83740032842f719ccdd0a03fa3c51b043666ad966818548ef8d21b9d92c5b4bd"
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.351722 4895 scope.go:117] "RemoveContainer" containerID="7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47"
Jan 29 16:38:27 crc kubenswrapper[4895]: E0129 16:38:27.354553 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47\": container with ID starting with 7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47 not found: ID does not exist" containerID="7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47"
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.354607 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47"} err="failed to get container status \"7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47\": rpc error: code = NotFound desc = could not find container \"7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47\": container with ID starting with 7c93b01ed64ec6fafbe985836ec672f9b671d384f491e4fab6e74eafe8357e47 not found: ID does not exist"
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.354644 4895 scope.go:117] "RemoveContainer" containerID="4df2e9ea016980ac208264d8771a96e6872c6f25e3ca50be953e602b0b410c46"
Jan 29 16:38:27 crc kubenswrapper[4895]: E0129 16:38:27.355211 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df2e9ea016980ac208264d8771a96e6872c6f25e3ca50be953e602b0b410c46\": container with ID starting with 4df2e9ea016980ac208264d8771a96e6872c6f25e3ca50be953e602b0b410c46 not found: ID does not exist" containerID="4df2e9ea016980ac208264d8771a96e6872c6f25e3ca50be953e602b0b410c46"
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.355279 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df2e9ea016980ac208264d8771a96e6872c6f25e3ca50be953e602b0b410c46"} err="failed to get container status \"4df2e9ea016980ac208264d8771a96e6872c6f25e3ca50be953e602b0b410c46\": rpc error: code = NotFound desc = could not find container \"4df2e9ea016980ac208264d8771a96e6872c6f25e3ca50be953e602b0b410c46\": container with ID starting with 4df2e9ea016980ac208264d8771a96e6872c6f25e3ca50be953e602b0b410c46 not found: ID does not exist"
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.355322 4895 scope.go:117] "RemoveContainer" containerID="83740032842f719ccdd0a03fa3c51b043666ad966818548ef8d21b9d92c5b4bd"
Jan 29 16:38:27 crc kubenswrapper[4895]: E0129 16:38:27.355722 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83740032842f719ccdd0a03fa3c51b043666ad966818548ef8d21b9d92c5b4bd\": container with ID starting with 83740032842f719ccdd0a03fa3c51b043666ad966818548ef8d21b9d92c5b4bd not found: ID does not exist" containerID="83740032842f719ccdd0a03fa3c51b043666ad966818548ef8d21b9d92c5b4bd"
Jan 29 16:38:27 crc kubenswrapper[4895]: I0129 16:38:27.355754 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83740032842f719ccdd0a03fa3c51b043666ad966818548ef8d21b9d92c5b4bd"} err="failed to get container status \"83740032842f719ccdd0a03fa3c51b043666ad966818548ef8d21b9d92c5b4bd\": rpc error: code = NotFound desc = could not find container \"83740032842f719ccdd0a03fa3c51b043666ad966818548ef8d21b9d92c5b4bd\": container with ID starting with 83740032842f719ccdd0a03fa3c51b043666ad966818548ef8d21b9d92c5b4bd not found: ID does not exist"
Jan 29 16:38:28 crc kubenswrapper[4895]: I0129 16:38:28.367343 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:28 crc kubenswrapper[4895]: I0129 16:38:28.367414 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:28 crc kubenswrapper[4895]: I0129 16:38:28.415990 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:29 crc kubenswrapper[4895]: E0129 16:38:29.040135 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99"
Jan 29 16:38:29 crc kubenswrapper[4895]: I0129 16:38:29.049535 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4daf356-0b44-4180-8e33-7b4048813006" path="/var/lib/kubelet/pods/d4daf356-0b44-4180-8e33-7b4048813006/volumes"
Jan 29 16:38:29 crc kubenswrapper[4895]: I0129 16:38:29.298617 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:29 crc kubenswrapper[4895]: I0129 16:38:29.617013 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f2ds"]
Jan 29 16:38:31 crc kubenswrapper[4895]: I0129 16:38:31.269991 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2f2ds" podUID="2347a0ef-6f44-4f0b-a24d-639b38d71e12" containerName="registry-server" containerID="cri-o://c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a" gracePeriod=2
Jan 29 16:38:31 crc kubenswrapper[4895]: I0129 16:38:31.798332 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:31 crc kubenswrapper[4895]: I0129 16:38:31.908921 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2347a0ef-6f44-4f0b-a24d-639b38d71e12-utilities\") pod \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\" (UID: \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\") "
Jan 29 16:38:31 crc kubenswrapper[4895]: I0129 16:38:31.909106 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj8pz\" (UniqueName: \"kubernetes.io/projected/2347a0ef-6f44-4f0b-a24d-639b38d71e12-kube-api-access-jj8pz\") pod \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\" (UID: \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\") "
Jan 29 16:38:31 crc kubenswrapper[4895]: I0129 16:38:31.909295 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2347a0ef-6f44-4f0b-a24d-639b38d71e12-catalog-content\") pod \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\" (UID: \"2347a0ef-6f44-4f0b-a24d-639b38d71e12\") "
Jan 29 16:38:31 crc kubenswrapper[4895]: I0129 16:38:31.910376 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2347a0ef-6f44-4f0b-a24d-639b38d71e12-utilities" (OuterVolumeSpecName: "utilities") pod "2347a0ef-6f44-4f0b-a24d-639b38d71e12" (UID: "2347a0ef-6f44-4f0b-a24d-639b38d71e12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:38:31 crc kubenswrapper[4895]: I0129 16:38:31.911408 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2347a0ef-6f44-4f0b-a24d-639b38d71e12-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 16:38:31 crc kubenswrapper[4895]: I0129 16:38:31.926333 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2347a0ef-6f44-4f0b-a24d-639b38d71e12-kube-api-access-jj8pz" (OuterVolumeSpecName: "kube-api-access-jj8pz") pod "2347a0ef-6f44-4f0b-a24d-639b38d71e12" (UID: "2347a0ef-6f44-4f0b-a24d-639b38d71e12"). InnerVolumeSpecName "kube-api-access-jj8pz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:38:31 crc kubenswrapper[4895]: I0129 16:38:31.940235 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2347a0ef-6f44-4f0b-a24d-639b38d71e12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2347a0ef-6f44-4f0b-a24d-639b38d71e12" (UID: "2347a0ef-6f44-4f0b-a24d-639b38d71e12"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.012834 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj8pz\" (UniqueName: \"kubernetes.io/projected/2347a0ef-6f44-4f0b-a24d-639b38d71e12-kube-api-access-jj8pz\") on node \"crc\" DevicePath \"\""
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.012898 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2347a0ef-6f44-4f0b-a24d-639b38d71e12-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.282327 4895 generic.go:334] "Generic (PLEG): container finished" podID="2347a0ef-6f44-4f0b-a24d-639b38d71e12" containerID="c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a" exitCode=0
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.282370 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f2ds" event={"ID":"2347a0ef-6f44-4f0b-a24d-639b38d71e12","Type":"ContainerDied","Data":"c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a"}
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.282423 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2f2ds" event={"ID":"2347a0ef-6f44-4f0b-a24d-639b38d71e12","Type":"ContainerDied","Data":"31bc4da9359b4609a3c52a4b70ada33057d77483bbf53b049b55d067ff9d5e44"}
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.282446 4895 scope.go:117] "RemoveContainer" containerID="c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a"
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.282546 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2f2ds"
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.314188 4895 scope.go:117] "RemoveContainer" containerID="0e65a27f682ec82730e65663789f14f32f2e4216b7b4d6f44841117d40b8fbcd"
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.332857 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f2ds"]
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.340994 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2f2ds"]
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.342745 4895 scope.go:117] "RemoveContainer" containerID="355c20c9ecc2dded0877c0a4fdda4af369e5276ee348a380cfe41bfa85f4623b"
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.405378 4895 scope.go:117] "RemoveContainer" containerID="c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a"
Jan 29 16:38:32 crc kubenswrapper[4895]: E0129 16:38:32.406439 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a\": container with ID starting with c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a not found: ID does not exist" containerID="c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a"
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.406493 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a"} err="failed to get container status \"c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a\": rpc error: code = NotFound desc = could not find container \"c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a\": container with ID starting with c68ccc01fe5b9e3249b7f2ddc4df9356ffc21bca01e89a90cbeff4b9160d139a not found: ID does not exist"
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.406529 4895 scope.go:117] "RemoveContainer" containerID="0e65a27f682ec82730e65663789f14f32f2e4216b7b4d6f44841117d40b8fbcd"
Jan 29 16:38:32 crc kubenswrapper[4895]: E0129 16:38:32.407463 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e65a27f682ec82730e65663789f14f32f2e4216b7b4d6f44841117d40b8fbcd\": container with ID starting with 0e65a27f682ec82730e65663789f14f32f2e4216b7b4d6f44841117d40b8fbcd not found: ID does not exist" containerID="0e65a27f682ec82730e65663789f14f32f2e4216b7b4d6f44841117d40b8fbcd"
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.407504 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e65a27f682ec82730e65663789f14f32f2e4216b7b4d6f44841117d40b8fbcd"} err="failed to get container status \"0e65a27f682ec82730e65663789f14f32f2e4216b7b4d6f44841117d40b8fbcd\": rpc error: code = NotFound desc = could not find container \"0e65a27f682ec82730e65663789f14f32f2e4216b7b4d6f44841117d40b8fbcd\": container with ID starting with 0e65a27f682ec82730e65663789f14f32f2e4216b7b4d6f44841117d40b8fbcd not found: ID does not exist"
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.407532 4895 scope.go:117] "RemoveContainer" containerID="355c20c9ecc2dded0877c0a4fdda4af369e5276ee348a380cfe41bfa85f4623b"
Jan 29 16:38:32 crc kubenswrapper[4895]: E0129 16:38:32.407885 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"355c20c9ecc2dded0877c0a4fdda4af369e5276ee348a380cfe41bfa85f4623b\": container with ID starting with 355c20c9ecc2dded0877c0a4fdda4af369e5276ee348a380cfe41bfa85f4623b not found: ID does not exist" containerID="355c20c9ecc2dded0877c0a4fdda4af369e5276ee348a380cfe41bfa85f4623b"
Jan 29 16:38:32 crc kubenswrapper[4895]: I0129 16:38:32.407942 4895 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355c20c9ecc2dded0877c0a4fdda4af369e5276ee348a380cfe41bfa85f4623b"} err="failed to get container status \"355c20c9ecc2dded0877c0a4fdda4af369e5276ee348a380cfe41bfa85f4623b\": rpc error: code = NotFound desc = could not find container \"355c20c9ecc2dded0877c0a4fdda4af369e5276ee348a380cfe41bfa85f4623b\": container with ID starting with 355c20c9ecc2dded0877c0a4fdda4af369e5276ee348a380cfe41bfa85f4623b not found: ID does not exist" Jan 29 16:38:33 crc kubenswrapper[4895]: I0129 16:38:33.048187 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2347a0ef-6f44-4f0b-a24d-639b38d71e12" path="/var/lib/kubelet/pods/2347a0ef-6f44-4f0b-a24d-639b38d71e12/volumes" Jan 29 16:38:37 crc kubenswrapper[4895]: I0129 16:38:37.340914 4895 generic.go:334] "Generic (PLEG): container finished" podID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" containerID="cf6a9617fd7d553774791cf2fafde5e0538114dc2e2ee09a8987502844edc140" exitCode=0 Jan 29 16:38:37 crc kubenswrapper[4895]: I0129 16:38:37.340951 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-92qm2" event={"ID":"e1a5f19e-38ec-4e07-91f9-37abeda873f5","Type":"ContainerDied","Data":"cf6a9617fd7d553774791cf2fafde5e0538114dc2e2ee09a8987502844edc140"} Jan 29 16:38:39 crc kubenswrapper[4895]: I0129 16:38:39.371809 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-92qm2" event={"ID":"e1a5f19e-38ec-4e07-91f9-37abeda873f5","Type":"ContainerStarted","Data":"323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0"} Jan 29 16:38:39 crc kubenswrapper[4895]: I0129 16:38:39.410880 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-92qm2" podStartSLOduration=3.083003161 podStartE2EDuration="1m38.410830679s" podCreationTimestamp="2026-01-29 16:37:01 +0000 UTC" firstStartedPulling="2026-01-29 
16:37:03.161011686 +0000 UTC m=+1506.963988950" lastFinishedPulling="2026-01-29 16:38:38.488839204 +0000 UTC m=+1602.291816468" observedRunningTime="2026-01-29 16:38:39.400462279 +0000 UTC m=+1603.203439543" watchObservedRunningTime="2026-01-29 16:38:39.410830679 +0000 UTC m=+1603.213807953" Jan 29 16:38:41 crc kubenswrapper[4895]: I0129 16:38:41.956708 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:38:41 crc kubenswrapper[4895]: I0129 16:38:41.958176 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:38:42 crc kubenswrapper[4895]: E0129 16:38:42.040751 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" Jan 29 16:38:43 crc kubenswrapper[4895]: I0129 16:38:43.034750 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-92qm2" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" containerName="registry-server" probeResult="failure" output=< Jan 29 16:38:43 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 16:38:43 crc kubenswrapper[4895]: > Jan 29 16:38:52 crc kubenswrapper[4895]: I0129 16:38:52.017644 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:38:52 crc kubenswrapper[4895]: I0129 16:38:52.077712 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:38:52 crc kubenswrapper[4895]: I0129 16:38:52.262713 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-92qm2"] Jan 29 16:38:53 crc 
kubenswrapper[4895]: I0129 16:38:53.528295 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-92qm2" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" containerName="registry-server" containerID="cri-o://323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0" gracePeriod=2 Jan 29 16:38:53 crc kubenswrapper[4895]: I0129 16:38:53.973111 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.112717 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a5f19e-38ec-4e07-91f9-37abeda873f5-utilities\") pod \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\" (UID: \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\") " Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.112886 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a5f19e-38ec-4e07-91f9-37abeda873f5-catalog-content\") pod \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\" (UID: \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\") " Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.112945 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv594\" (UniqueName: \"kubernetes.io/projected/e1a5f19e-38ec-4e07-91f9-37abeda873f5-kube-api-access-xv594\") pod \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\" (UID: \"e1a5f19e-38ec-4e07-91f9-37abeda873f5\") " Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.115809 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a5f19e-38ec-4e07-91f9-37abeda873f5-utilities" (OuterVolumeSpecName: "utilities") pod "e1a5f19e-38ec-4e07-91f9-37abeda873f5" (UID: "e1a5f19e-38ec-4e07-91f9-37abeda873f5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.130192 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a5f19e-38ec-4e07-91f9-37abeda873f5-kube-api-access-xv594" (OuterVolumeSpecName: "kube-api-access-xv594") pod "e1a5f19e-38ec-4e07-91f9-37abeda873f5" (UID: "e1a5f19e-38ec-4e07-91f9-37abeda873f5"). InnerVolumeSpecName "kube-api-access-xv594". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.217371 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv594\" (UniqueName: \"kubernetes.io/projected/e1a5f19e-38ec-4e07-91f9-37abeda873f5-kube-api-access-xv594\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.217418 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a5f19e-38ec-4e07-91f9-37abeda873f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.318337 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a5f19e-38ec-4e07-91f9-37abeda873f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1a5f19e-38ec-4e07-91f9-37abeda873f5" (UID: "e1a5f19e-38ec-4e07-91f9-37abeda873f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.320121 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a5f19e-38ec-4e07-91f9-37abeda873f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.543755 4895 generic.go:334] "Generic (PLEG): container finished" podID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" containerID="323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0" exitCode=0 Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.543819 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-92qm2" event={"ID":"e1a5f19e-38ec-4e07-91f9-37abeda873f5","Type":"ContainerDied","Data":"323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0"} Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.543881 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-92qm2" event={"ID":"e1a5f19e-38ec-4e07-91f9-37abeda873f5","Type":"ContainerDied","Data":"53fa52ac36ebab240d35a614ec5ea5a5bd1033ed9a57708b8b207dbfe9489db1"} Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.543909 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-92qm2" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.543924 4895 scope.go:117] "RemoveContainer" containerID="323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.588899 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-92qm2"] Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.597658 4895 scope.go:117] "RemoveContainer" containerID="cf6a9617fd7d553774791cf2fafde5e0538114dc2e2ee09a8987502844edc140" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.632602 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-92qm2"] Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.635394 4895 scope.go:117] "RemoveContainer" containerID="62152aa747c01047703c5948c94613e4939314c52b73363f5ad3b127abb8691d" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.673675 4895 scope.go:117] "RemoveContainer" containerID="323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0" Jan 29 16:38:54 crc kubenswrapper[4895]: E0129 16:38:54.676163 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0\": container with ID starting with 323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0 not found: ID does not exist" containerID="323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.676252 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0"} err="failed to get container status \"323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0\": rpc error: code = NotFound desc = could not find container 
\"323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0\": container with ID starting with 323894362e135d22d6ac10cae556902c1c69dada30cb8c4a6378c9e1bd9796f0 not found: ID does not exist" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.676294 4895 scope.go:117] "RemoveContainer" containerID="cf6a9617fd7d553774791cf2fafde5e0538114dc2e2ee09a8987502844edc140" Jan 29 16:38:54 crc kubenswrapper[4895]: E0129 16:38:54.677013 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf6a9617fd7d553774791cf2fafde5e0538114dc2e2ee09a8987502844edc140\": container with ID starting with cf6a9617fd7d553774791cf2fafde5e0538114dc2e2ee09a8987502844edc140 not found: ID does not exist" containerID="cf6a9617fd7d553774791cf2fafde5e0538114dc2e2ee09a8987502844edc140" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.677042 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf6a9617fd7d553774791cf2fafde5e0538114dc2e2ee09a8987502844edc140"} err="failed to get container status \"cf6a9617fd7d553774791cf2fafde5e0538114dc2e2ee09a8987502844edc140\": rpc error: code = NotFound desc = could not find container \"cf6a9617fd7d553774791cf2fafde5e0538114dc2e2ee09a8987502844edc140\": container with ID starting with cf6a9617fd7d553774791cf2fafde5e0538114dc2e2ee09a8987502844edc140 not found: ID does not exist" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.677057 4895 scope.go:117] "RemoveContainer" containerID="62152aa747c01047703c5948c94613e4939314c52b73363f5ad3b127abb8691d" Jan 29 16:38:54 crc kubenswrapper[4895]: E0129 16:38:54.677480 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62152aa747c01047703c5948c94613e4939314c52b73363f5ad3b127abb8691d\": container with ID starting with 62152aa747c01047703c5948c94613e4939314c52b73363f5ad3b127abb8691d not found: ID does not exist" 
containerID="62152aa747c01047703c5948c94613e4939314c52b73363f5ad3b127abb8691d" Jan 29 16:38:54 crc kubenswrapper[4895]: I0129 16:38:54.677510 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62152aa747c01047703c5948c94613e4939314c52b73363f5ad3b127abb8691d"} err="failed to get container status \"62152aa747c01047703c5948c94613e4939314c52b73363f5ad3b127abb8691d\": rpc error: code = NotFound desc = could not find container \"62152aa747c01047703c5948c94613e4939314c52b73363f5ad3b127abb8691d\": container with ID starting with 62152aa747c01047703c5948c94613e4939314c52b73363f5ad3b127abb8691d not found: ID does not exist" Jan 29 16:38:55 crc kubenswrapper[4895]: I0129 16:38:55.047855 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" path="/var/lib/kubelet/pods/e1a5f19e-38ec-4e07-91f9-37abeda873f5/volumes" Jan 29 16:38:57 crc kubenswrapper[4895]: I0129 16:38:57.584438 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b76d8a0a-9395-4b6c-8775-efa0354ace99","Type":"ContainerStarted","Data":"4ecee84e4bfc3958ad78a0525bc05bfa655bda4c8f82981bc7b48299b2362a23"} Jan 29 16:38:57 crc kubenswrapper[4895]: I0129 16:38:57.585630 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 16:38:57 crc kubenswrapper[4895]: I0129 16:38:57.610449 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.122774683 podStartE2EDuration="6m1.610424557s" podCreationTimestamp="2026-01-29 16:32:56 +0000 UTC" firstStartedPulling="2026-01-29 16:32:57.599285113 +0000 UTC m=+1261.402262377" lastFinishedPulling="2026-01-29 16:38:57.086934987 +0000 UTC m=+1620.889912251" observedRunningTime="2026-01-29 16:38:57.609498402 +0000 UTC m=+1621.412475676" watchObservedRunningTime="2026-01-29 16:38:57.610424557 +0000 UTC m=+1621.413401831" Jan 
29 16:38:57 crc kubenswrapper[4895]: I0129 16:38:57.822895 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:38:57 crc kubenswrapper[4895]: I0129 16:38:57.823505 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:39:12 crc kubenswrapper[4895]: I0129 16:39:12.053047 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zzlg4"] Jan 29 16:39:12 crc kubenswrapper[4895]: I0129 16:39:12.062367 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zzlg4"] Jan 29 16:39:13 crc kubenswrapper[4895]: I0129 16:39:13.051923 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2198769e-7dd4-4dbb-8048-93e60289c898" path="/var/lib/kubelet/pods/2198769e-7dd4-4dbb-8048-93e60289c898/volumes" Jan 29 16:39:13 crc kubenswrapper[4895]: I0129 16:39:13.052973 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-g8ww6"] Jan 29 16:39:13 crc kubenswrapper[4895]: I0129 16:39:13.054840 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-g8ww6"] Jan 29 16:39:14 crc kubenswrapper[4895]: I0129 16:39:14.048114 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-140c-account-create-update-5cftn"] Jan 29 16:39:14 crc kubenswrapper[4895]: I0129 16:39:14.064406 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0483-account-create-update-tfx5k"] Jan 29 
16:39:14 crc kubenswrapper[4895]: I0129 16:39:14.076943 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-140c-account-create-update-5cftn"] Jan 29 16:39:14 crc kubenswrapper[4895]: I0129 16:39:14.090143 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0483-account-create-update-tfx5k"] Jan 29 16:39:15 crc kubenswrapper[4895]: I0129 16:39:15.055888 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e" path="/var/lib/kubelet/pods/1b0a86fa-38a5-4cf2-b3dd-8d1a9a9f875e/volumes" Jan 29 16:39:15 crc kubenswrapper[4895]: I0129 16:39:15.057744 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c1184b-9bdb-49aa-9cdb-934a29d9875c" path="/var/lib/kubelet/pods/39c1184b-9bdb-49aa-9cdb-934a29d9875c/volumes" Jan 29 16:39:15 crc kubenswrapper[4895]: I0129 16:39:15.058415 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f35fe053-f782-4049-bb33-0dc45a1a07aa" path="/var/lib/kubelet/pods/f35fe053-f782-4049-bb33-0dc45a1a07aa/volumes" Jan 29 16:39:19 crc kubenswrapper[4895]: I0129 16:39:19.057296 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0a76-account-create-update-tjmgh"] Jan 29 16:39:19 crc kubenswrapper[4895]: I0129 16:39:19.062495 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0a76-account-create-update-tjmgh"] Jan 29 16:39:19 crc kubenswrapper[4895]: I0129 16:39:19.077133 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-wn525"] Jan 29 16:39:19 crc kubenswrapper[4895]: I0129 16:39:19.087311 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-wn525"] Jan 29 16:39:21 crc kubenswrapper[4895]: I0129 16:39:21.051615 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19c50f54-9d03-4873-85e4-1958e9f81a90" 
path="/var/lib/kubelet/pods/19c50f54-9d03-4873-85e4-1958e9f81a90/volumes" Jan 29 16:39:21 crc kubenswrapper[4895]: I0129 16:39:21.055403 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d710b22-52da-4e44-8483-00d522bdf44e" path="/var/lib/kubelet/pods/3d710b22-52da-4e44-8483-00d522bdf44e/volumes" Jan 29 16:39:27 crc kubenswrapper[4895]: I0129 16:39:27.135006 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 16:39:27 crc kubenswrapper[4895]: I0129 16:39:27.822639 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:39:27 crc kubenswrapper[4895]: I0129 16:39:27.823007 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.231835 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.232582 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f3ac4bcf-7c2a-48e7-9921-3035bdc8f488" containerName="kube-state-metrics" containerID="cri-o://b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb" gracePeriod=30 Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.748272 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.883953 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kktng\" (UniqueName: \"kubernetes.io/projected/f3ac4bcf-7c2a-48e7-9921-3035bdc8f488-kube-api-access-kktng\") pod \"f3ac4bcf-7c2a-48e7-9921-3035bdc8f488\" (UID: \"f3ac4bcf-7c2a-48e7-9921-3035bdc8f488\") " Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.891767 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3ac4bcf-7c2a-48e7-9921-3035bdc8f488-kube-api-access-kktng" (OuterVolumeSpecName: "kube-api-access-kktng") pod "f3ac4bcf-7c2a-48e7-9921-3035bdc8f488" (UID: "f3ac4bcf-7c2a-48e7-9921-3035bdc8f488"). InnerVolumeSpecName "kube-api-access-kktng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.947671 4895 generic.go:334] "Generic (PLEG): container finished" podID="f3ac4bcf-7c2a-48e7-9921-3035bdc8f488" containerID="b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb" exitCode=2 Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.947723 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f3ac4bcf-7c2a-48e7-9921-3035bdc8f488","Type":"ContainerDied","Data":"b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb"} Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.947853 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.948567 4895 scope.go:117] "RemoveContainer" containerID="b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb" Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.948548 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f3ac4bcf-7c2a-48e7-9921-3035bdc8f488","Type":"ContainerDied","Data":"29be0e83256f2fc2c54d7d29021386d10effea33def2950e2c278b149edec722"} Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.974898 4895 scope.go:117] "RemoveContainer" containerID="b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb" Jan 29 16:39:30 crc kubenswrapper[4895]: E0129 16:39:30.976155 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb\": container with ID starting with b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb not found: ID does not exist" containerID="b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb" Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.976230 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb"} err="failed to get container status \"b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb\": rpc error: code = NotFound desc = could not find container \"b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb\": container with ID starting with b04b488d8b07e06ce18a95c89defe210090cec9a142e374f2c911333210cbebb not found: ID does not exist" Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.987621 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kktng\" (UniqueName: 
\"kubernetes.io/projected/f3ac4bcf-7c2a-48e7-9921-3035bdc8f488-kube-api-access-kktng\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:30 crc kubenswrapper[4895]: I0129 16:39:30.998614 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.015891 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.034614 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:39:31 crc kubenswrapper[4895]: E0129 16:39:31.035042 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" containerName="extract-utilities" Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035059 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" containerName="extract-utilities" Jan 29 16:39:31 crc kubenswrapper[4895]: E0129 16:39:31.035071 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ac4bcf-7c2a-48e7-9921-3035bdc8f488" containerName="kube-state-metrics" Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035079 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ac4bcf-7c2a-48e7-9921-3035bdc8f488" containerName="kube-state-metrics" Jan 29 16:39:31 crc kubenswrapper[4895]: E0129 16:39:31.035100 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2347a0ef-6f44-4f0b-a24d-639b38d71e12" containerName="registry-server" Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035106 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2347a0ef-6f44-4f0b-a24d-639b38d71e12" containerName="registry-server" Jan 29 16:39:31 crc kubenswrapper[4895]: E0129 16:39:31.035117 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" containerName="registry-server" Jan 29 
16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035124 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" containerName="registry-server"
Jan 29 16:39:31 crc kubenswrapper[4895]: E0129 16:39:31.035146 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4daf356-0b44-4180-8e33-7b4048813006" containerName="registry-server"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035152 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4daf356-0b44-4180-8e33-7b4048813006" containerName="registry-server"
Jan 29 16:39:31 crc kubenswrapper[4895]: E0129 16:39:31.035161 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2347a0ef-6f44-4f0b-a24d-639b38d71e12" containerName="extract-content"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035167 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2347a0ef-6f44-4f0b-a24d-639b38d71e12" containerName="extract-content"
Jan 29 16:39:31 crc kubenswrapper[4895]: E0129 16:39:31.035178 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4daf356-0b44-4180-8e33-7b4048813006" containerName="extract-utilities"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035184 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4daf356-0b44-4180-8e33-7b4048813006" containerName="extract-utilities"
Jan 29 16:39:31 crc kubenswrapper[4895]: E0129 16:39:31.035194 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4daf356-0b44-4180-8e33-7b4048813006" containerName="extract-content"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035201 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4daf356-0b44-4180-8e33-7b4048813006" containerName="extract-content"
Jan 29 16:39:31 crc kubenswrapper[4895]: E0129 16:39:31.035208 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2347a0ef-6f44-4f0b-a24d-639b38d71e12" containerName="extract-utilities"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035214 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2347a0ef-6f44-4f0b-a24d-639b38d71e12" containerName="extract-utilities"
Jan 29 16:39:31 crc kubenswrapper[4895]: E0129 16:39:31.035230 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" containerName="extract-content"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035237 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" containerName="extract-content"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035396 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2347a0ef-6f44-4f0b-a24d-639b38d71e12" containerName="registry-server"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035412 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3ac4bcf-7c2a-48e7-9921-3035bdc8f488" containerName="kube-state-metrics"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035426 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4daf356-0b44-4180-8e33-7b4048813006" containerName="registry-server"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.035442 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a5f19e-38ec-4e07-91f9-37abeda873f5" containerName="registry-server"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.038546 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.044582 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.044816 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.061786 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3ac4bcf-7c2a-48e7-9921-3035bdc8f488" path="/var/lib/kubelet/pods/f3ac4bcf-7c2a-48e7-9921-3035bdc8f488/volumes"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.062482 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.191764 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42l9z\" (UniqueName: \"kubernetes.io/projected/008b84dd-8bf0-440a-bde9-4bbc0ab1b412-kube-api-access-42l9z\") pod \"kube-state-metrics-0\" (UID: \"008b84dd-8bf0-440a-bde9-4bbc0ab1b412\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.191822 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/008b84dd-8bf0-440a-bde9-4bbc0ab1b412-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"008b84dd-8bf0-440a-bde9-4bbc0ab1b412\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.191882 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/008b84dd-8bf0-440a-bde9-4bbc0ab1b412-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"008b84dd-8bf0-440a-bde9-4bbc0ab1b412\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.192078 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008b84dd-8bf0-440a-bde9-4bbc0ab1b412-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"008b84dd-8bf0-440a-bde9-4bbc0ab1b412\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.293975 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42l9z\" (UniqueName: \"kubernetes.io/projected/008b84dd-8bf0-440a-bde9-4bbc0ab1b412-kube-api-access-42l9z\") pod \"kube-state-metrics-0\" (UID: \"008b84dd-8bf0-440a-bde9-4bbc0ab1b412\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.294037 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/008b84dd-8bf0-440a-bde9-4bbc0ab1b412-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"008b84dd-8bf0-440a-bde9-4bbc0ab1b412\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.294068 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/008b84dd-8bf0-440a-bde9-4bbc0ab1b412-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"008b84dd-8bf0-440a-bde9-4bbc0ab1b412\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.294132 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008b84dd-8bf0-440a-bde9-4bbc0ab1b412-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"008b84dd-8bf0-440a-bde9-4bbc0ab1b412\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.302580 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/008b84dd-8bf0-440a-bde9-4bbc0ab1b412-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"008b84dd-8bf0-440a-bde9-4bbc0ab1b412\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.302766 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/008b84dd-8bf0-440a-bde9-4bbc0ab1b412-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"008b84dd-8bf0-440a-bde9-4bbc0ab1b412\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.304592 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008b84dd-8bf0-440a-bde9-4bbc0ab1b412-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"008b84dd-8bf0-440a-bde9-4bbc0ab1b412\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.313798 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42l9z\" (UniqueName: \"kubernetes.io/projected/008b84dd-8bf0-440a-bde9-4bbc0ab1b412-kube-api-access-42l9z\") pod \"kube-state-metrics-0\" (UID: \"008b84dd-8bf0-440a-bde9-4bbc0ab1b412\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.362348 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.458142 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.458514 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="ceilometer-central-agent" containerID="cri-o://e6776baf98e3c74dcce2ea05f6257cb02d5d52590a5c6e16ea3f24463443188c" gracePeriod=30
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.459026 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="proxy-httpd" containerID="cri-o://4ecee84e4bfc3958ad78a0525bc05bfa655bda4c8f82981bc7b48299b2362a23" gracePeriod=30
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.459114 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="ceilometer-notification-agent" containerID="cri-o://f62f3fbd256ef4b3dee36f77c62c6e5568b0018163aed7d27a02e379a1114749" gracePeriod=30
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.459243 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="sg-core" containerID="cri-o://1d7fdc0603e2a7da71090e3dc5aa4807fc75682ff8b9703284c36e36213d876a" gracePeriod=30
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.832500 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.960183 4895 generic.go:334] "Generic (PLEG): container finished" podID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerID="4ecee84e4bfc3958ad78a0525bc05bfa655bda4c8f82981bc7b48299b2362a23" exitCode=0
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.960222 4895 generic.go:334] "Generic (PLEG): container finished" podID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerID="1d7fdc0603e2a7da71090e3dc5aa4807fc75682ff8b9703284c36e36213d876a" exitCode=2
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.960270 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b76d8a0a-9395-4b6c-8775-efa0354ace99","Type":"ContainerDied","Data":"4ecee84e4bfc3958ad78a0525bc05bfa655bda4c8f82981bc7b48299b2362a23"}
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.960302 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b76d8a0a-9395-4b6c-8775-efa0354ace99","Type":"ContainerDied","Data":"1d7fdc0603e2a7da71090e3dc5aa4807fc75682ff8b9703284c36e36213d876a"}
Jan 29 16:39:31 crc kubenswrapper[4895]: I0129 16:39:31.966472 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"008b84dd-8bf0-440a-bde9-4bbc0ab1b412","Type":"ContainerStarted","Data":"fdb82a23e0d05250e36d5a36f4817fd8822b3b8d5d8efa35ce3c7c759b410e9e"}
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:32.998687 4895 generic.go:334] "Generic (PLEG): container finished" podID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerID="f62f3fbd256ef4b3dee36f77c62c6e5568b0018163aed7d27a02e379a1114749" exitCode=0
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:32.999603 4895 generic.go:334] "Generic (PLEG): container finished" podID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerID="e6776baf98e3c74dcce2ea05f6257cb02d5d52590a5c6e16ea3f24463443188c" exitCode=0
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:32.998858 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b76d8a0a-9395-4b6c-8775-efa0354ace99","Type":"ContainerDied","Data":"f62f3fbd256ef4b3dee36f77c62c6e5568b0018163aed7d27a02e379a1114749"}
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:32.999668 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b76d8a0a-9395-4b6c-8775-efa0354ace99","Type":"ContainerDied","Data":"e6776baf98e3c74dcce2ea05f6257cb02d5d52590a5c6e16ea3f24463443188c"}
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.258842 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.361567 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-sg-core-conf-yaml\") pod \"b76d8a0a-9395-4b6c-8775-efa0354ace99\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") "
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.361713 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-config-data\") pod \"b76d8a0a-9395-4b6c-8775-efa0354ace99\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") "
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.361885 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b76d8a0a-9395-4b6c-8775-efa0354ace99-run-httpd\") pod \"b76d8a0a-9395-4b6c-8775-efa0354ace99\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") "
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.361983 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b76d8a0a-9395-4b6c-8775-efa0354ace99-log-httpd\") pod \"b76d8a0a-9395-4b6c-8775-efa0354ace99\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") "
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.362026 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-combined-ca-bundle\") pod \"b76d8a0a-9395-4b6c-8775-efa0354ace99\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") "
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.362083 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-scripts\") pod \"b76d8a0a-9395-4b6c-8775-efa0354ace99\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") "
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.362134 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwvfz\" (UniqueName: \"kubernetes.io/projected/b76d8a0a-9395-4b6c-8775-efa0354ace99-kube-api-access-lwvfz\") pod \"b76d8a0a-9395-4b6c-8775-efa0354ace99\" (UID: \"b76d8a0a-9395-4b6c-8775-efa0354ace99\") "
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.362898 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b76d8a0a-9395-4b6c-8775-efa0354ace99-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b76d8a0a-9395-4b6c-8775-efa0354ace99" (UID: "b76d8a0a-9395-4b6c-8775-efa0354ace99"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.363236 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b76d8a0a-9395-4b6c-8775-efa0354ace99-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b76d8a0a-9395-4b6c-8775-efa0354ace99" (UID: "b76d8a0a-9395-4b6c-8775-efa0354ace99"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.375160 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-scripts" (OuterVolumeSpecName: "scripts") pod "b76d8a0a-9395-4b6c-8775-efa0354ace99" (UID: "b76d8a0a-9395-4b6c-8775-efa0354ace99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.377506 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76d8a0a-9395-4b6c-8775-efa0354ace99-kube-api-access-lwvfz" (OuterVolumeSpecName: "kube-api-access-lwvfz") pod "b76d8a0a-9395-4b6c-8775-efa0354ace99" (UID: "b76d8a0a-9395-4b6c-8775-efa0354ace99"). InnerVolumeSpecName "kube-api-access-lwvfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.397889 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b76d8a0a-9395-4b6c-8775-efa0354ace99" (UID: "b76d8a0a-9395-4b6c-8775-efa0354ace99"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.456854 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b76d8a0a-9395-4b6c-8775-efa0354ace99" (UID: "b76d8a0a-9395-4b6c-8775-efa0354ace99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.464337 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwvfz\" (UniqueName: \"kubernetes.io/projected/b76d8a0a-9395-4b6c-8775-efa0354ace99-kube-api-access-lwvfz\") on node \"crc\" DevicePath \"\""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.464363 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.464373 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b76d8a0a-9395-4b6c-8775-efa0354ace99-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.464383 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b76d8a0a-9395-4b6c-8775-efa0354ace99-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.464392 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.464400 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.484820 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-config-data" (OuterVolumeSpecName: "config-data") pod "b76d8a0a-9395-4b6c-8775-efa0354ace99" (UID: "b76d8a0a-9395-4b6c-8775-efa0354ace99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:39:33 crc kubenswrapper[4895]: I0129 16:39:33.566543 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b76d8a0a-9395-4b6c-8775-efa0354ace99-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.013578 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b76d8a0a-9395-4b6c-8775-efa0354ace99","Type":"ContainerDied","Data":"a9c0c1347384641e76bb26099f8fd31927675168774a6c753aad92bd6e116c08"}
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.013651 4895 scope.go:117] "RemoveContainer" containerID="4ecee84e4bfc3958ad78a0525bc05bfa655bda4c8f82981bc7b48299b2362a23"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.013658 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.018243 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"008b84dd-8bf0-440a-bde9-4bbc0ab1b412","Type":"ContainerStarted","Data":"5d297b1e49352c0d6ac01ee6a8656e4a9ef36ef10641a0efa5aba03441b48604"}
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.018458 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.052547 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.255111284 podStartE2EDuration="3.052517574s" podCreationTimestamp="2026-01-29 16:39:31 +0000 UTC" firstStartedPulling="2026-01-29 16:39:31.856897627 +0000 UTC m=+1655.659874891" lastFinishedPulling="2026-01-29 16:39:32.654303897 +0000 UTC m=+1656.457281181" observedRunningTime="2026-01-29 16:39:34.037689251 +0000 UTC m=+1657.840666545" watchObservedRunningTime="2026-01-29 16:39:34.052517574 +0000 UTC m=+1657.855494848"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.057725 4895 scope.go:117] "RemoveContainer" containerID="1d7fdc0603e2a7da71090e3dc5aa4807fc75682ff8b9703284c36e36213d876a"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.066268 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.075693 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.077535 4895 scope.go:117] "RemoveContainer" containerID="f62f3fbd256ef4b3dee36f77c62c6e5568b0018163aed7d27a02e379a1114749"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.091175 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:39:34 crc kubenswrapper[4895]: E0129 16:39:34.091710 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="ceilometer-central-agent"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.091727 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="ceilometer-central-agent"
Jan 29 16:39:34 crc kubenswrapper[4895]: E0129 16:39:34.091746 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="sg-core"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.091752 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="sg-core"
Jan 29 16:39:34 crc kubenswrapper[4895]: E0129 16:39:34.092387 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="proxy-httpd"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.092412 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="proxy-httpd"
Jan 29 16:39:34 crc kubenswrapper[4895]: E0129 16:39:34.092436 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="ceilometer-notification-agent"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.092443 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="ceilometer-notification-agent"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.092629 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="ceilometer-central-agent"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.092648 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="sg-core"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.092661 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="ceilometer-notification-agent"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.092674 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" containerName="proxy-httpd"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.094718 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.098161 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.098328 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.098454 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.105974 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.109334 4895 scope.go:117] "RemoveContainer" containerID="e6776baf98e3c74dcce2ea05f6257cb02d5d52590a5c6e16ea3f24463443188c"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.282410 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-scripts\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.282498 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.282533 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.282745 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-log-httpd\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.282807 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.282933 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-config-data\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.282958 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-run-httpd\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.283404 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nc5s\" (UniqueName: \"kubernetes.io/projected/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-kube-api-access-4nc5s\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.385653 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nc5s\" (UniqueName: \"kubernetes.io/projected/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-kube-api-access-4nc5s\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.385744 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-scripts\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.385789 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.385815 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.385858 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-log-httpd\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.385914 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.385987 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-run-httpd\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.386011 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-config-data\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.386618 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-run-httpd\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.386732 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-log-httpd\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.392400 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.392616 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.392636 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-scripts\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.393327 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.399073 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-config-data\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.405459 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nc5s\" (UniqueName: \"kubernetes.io/projected/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-kube-api-access-4nc5s\") pod \"ceilometer-0\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.418375 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:39:34 crc kubenswrapper[4895]: I0129 16:39:34.894170 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:39:35 crc kubenswrapper[4895]: I0129 16:39:35.031213 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5","Type":"ContainerStarted","Data":"5b5136d57b64cd1350daddad47e391c1a81264992a26331ee6078c47777f069a"}
Jan 29 16:39:35 crc kubenswrapper[4895]: I0129 16:39:35.062951 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b76d8a0a-9395-4b6c-8775-efa0354ace99" path="/var/lib/kubelet/pods/b76d8a0a-9395-4b6c-8775-efa0354ace99/volumes"
Jan 29 16:39:36 crc kubenswrapper[4895]: I0129 16:39:36.047658 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5","Type":"ContainerStarted","Data":"0916f4c8b5fef1c16cda0e0a7c9e21e24169b8f300b33e66a6f19bbe7420fa19"}
Jan 29 16:39:37 crc kubenswrapper[4895]: I0129 16:39:37.064416 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5","Type":"ContainerStarted","Data":"acc83029a70f2cb503d0d85246be47849baa775380e49e55638e306b861fe414"}
Jan 29 16:39:38 crc kubenswrapper[4895]: I0129 16:39:38.083287 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-49x8b"]
Jan 29 16:39:38 crc kubenswrapper[4895]: I0129 16:39:38.085686 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5","Type":"ContainerStarted","Data":"3bafd5d9bdeba9997c1409b4780f74c5177c9c81eb08f66013eba43652dc631f"}
Jan 29 16:39:38 crc kubenswrapper[4895]: I0129 16:39:38.095053 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-49x8b"]
Jan 29 16:39:39
crc kubenswrapper[4895]: I0129 16:39:39.049161 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f35430-2cba-4af3-bffb-fe817ccdb2e2" path="/var/lib/kubelet/pods/89f35430-2cba-4af3-bffb-fe817ccdb2e2/volumes" Jan 29 16:39:39 crc kubenswrapper[4895]: I0129 16:39:39.148951 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:39:40 crc kubenswrapper[4895]: I0129 16:39:40.109511 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5","Type":"ContainerStarted","Data":"38da4e7eaf8beb1ea68be8c59c0506c5e1ddd6eb195901ef4dd58eb504b224dd"} Jan 29 16:39:40 crc kubenswrapper[4895]: I0129 16:39:40.111281 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 16:39:40 crc kubenswrapper[4895]: I0129 16:39:40.168401 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.227001397 podStartE2EDuration="6.168375456s" podCreationTimestamp="2026-01-29 16:39:34 +0000 UTC" firstStartedPulling="2026-01-29 16:39:34.902084128 +0000 UTC m=+1658.705061392" lastFinishedPulling="2026-01-29 16:39:38.843458187 +0000 UTC m=+1662.646435451" observedRunningTime="2026-01-29 16:39:40.157534242 +0000 UTC m=+1663.960511506" watchObservedRunningTime="2026-01-29 16:39:40.168375456 +0000 UTC m=+1663.971352720" Jan 29 16:39:40 crc kubenswrapper[4895]: I0129 16:39:40.388603 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:39:41 crc kubenswrapper[4895]: I0129 16:39:41.377085 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 16:39:44 crc kubenswrapper[4895]: I0129 16:39:44.051488 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d1f5-account-create-update-8vm9b"] Jan 29 16:39:44 crc 
kubenswrapper[4895]: I0129 16:39:44.061900 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-baf0-account-create-update-m7shf"] Jan 29 16:39:44 crc kubenswrapper[4895]: I0129 16:39:44.071303 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d1f5-account-create-update-8vm9b"] Jan 29 16:39:44 crc kubenswrapper[4895]: I0129 16:39:44.080457 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-c4jml"] Jan 29 16:39:44 crc kubenswrapper[4895]: I0129 16:39:44.089488 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-baf0-account-create-update-m7shf"] Jan 29 16:39:44 crc kubenswrapper[4895]: I0129 16:39:44.099293 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-c4jml"] Jan 29 16:39:44 crc kubenswrapper[4895]: I0129 16:39:44.110974 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4923-account-create-update-rvxh4"] Jan 29 16:39:44 crc kubenswrapper[4895]: I0129 16:39:44.123403 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-wcksm"] Jan 29 16:39:44 crc kubenswrapper[4895]: I0129 16:39:44.136045 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4923-account-create-update-rvxh4"] Jan 29 16:39:44 crc kubenswrapper[4895]: I0129 16:39:44.146657 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-wcksm"] Jan 29 16:39:44 crc kubenswrapper[4895]: I0129 16:39:44.162067 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7v9rk"] Jan 29 16:39:44 crc kubenswrapper[4895]: I0129 16:39:44.168846 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7v9rk"] Jan 29 16:39:44 crc kubenswrapper[4895]: I0129 16:39:44.690766 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" 
podUID="f23fdbdb-0285-4d43-b9bd-923b372eaf42" containerName="rabbitmq" containerID="cri-o://5fbbb5604144066b13f3f294e6d850ee393d3449bc20fb862307a8db580c2194" gracePeriod=604795 Jan 29 16:39:45 crc kubenswrapper[4895]: I0129 16:39:45.049904 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="188e6f67-6977-4709-bb3a-caf493bcc276" path="/var/lib/kubelet/pods/188e6f67-6977-4709-bb3a-caf493bcc276/volumes" Jan 29 16:39:45 crc kubenswrapper[4895]: I0129 16:39:45.050940 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34d38ae5-5cb9-47f3-88f0-818962fed6c1" path="/var/lib/kubelet/pods/34d38ae5-5cb9-47f3-88f0-818962fed6c1/volumes" Jan 29 16:39:45 crc kubenswrapper[4895]: I0129 16:39:45.051612 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c63c14-1712-4979-b5f0-abf5a4d4b72a" path="/var/lib/kubelet/pods/85c63c14-1712-4979-b5f0-abf5a4d4b72a/volumes" Jan 29 16:39:45 crc kubenswrapper[4895]: I0129 16:39:45.052349 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaad67d5-b9bb-42ae-befd-3b8765e8b760" path="/var/lib/kubelet/pods/eaad67d5-b9bb-42ae-befd-3b8765e8b760/volumes" Jan 29 16:39:45 crc kubenswrapper[4895]: I0129 16:39:45.053494 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d83bb5-2939-44e6-a0c5-8bf4893aebda" path="/var/lib/kubelet/pods/f2d83bb5-2939-44e6-a0c5-8bf4893aebda/volumes" Jan 29 16:39:45 crc kubenswrapper[4895]: I0129 16:39:45.054081 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea59417-b9ef-43f6-b2f1-76d15c6dd51b" path="/var/lib/kubelet/pods/fea59417-b9ef-43f6-b2f1-76d15c6dd51b/volumes" Jan 29 16:39:46 crc kubenswrapper[4895]: I0129 16:39:46.140430 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="c3729063-b6e8-4de8-9ab9-7448a3ec325a" containerName="rabbitmq" 
containerID="cri-o://0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82" gracePeriod=604795 Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.230742 4895 generic.go:334] "Generic (PLEG): container finished" podID="f23fdbdb-0285-4d43-b9bd-923b372eaf42" containerID="5fbbb5604144066b13f3f294e6d850ee393d3449bc20fb862307a8db580c2194" exitCode=0 Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.231310 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f23fdbdb-0285-4d43-b9bd-923b372eaf42","Type":"ContainerDied","Data":"5fbbb5604144066b13f3f294e6d850ee393d3449bc20fb862307a8db580c2194"} Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.407496 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.476650 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srw6v\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-kube-api-access-srw6v\") pod \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.476712 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-confd\") pod \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.476806 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f23fdbdb-0285-4d43-b9bd-923b372eaf42-erlang-cookie-secret\") pod \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.476838 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-plugins\") pod \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.477146 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-config-data\") pod \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.477174 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-plugins-conf\") pod \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.477204 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-tls\") pod \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.477260 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f23fdbdb-0285-4d43-b9bd-923b372eaf42-pod-info\") pod \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.477289 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-erlang-cookie\") pod 
\"f23fdbdb-0285-4d43-b9bd-923b372eaf42\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.477347 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.477402 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-server-conf\") pod \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\" (UID: \"f23fdbdb-0285-4d43-b9bd-923b372eaf42\") " Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.479998 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f23fdbdb-0285-4d43-b9bd-923b372eaf42" (UID: "f23fdbdb-0285-4d43-b9bd-923b372eaf42"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.483965 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f23fdbdb-0285-4d43-b9bd-923b372eaf42" (UID: "f23fdbdb-0285-4d43-b9bd-923b372eaf42"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.501099 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f23fdbdb-0285-4d43-b9bd-923b372eaf42" (UID: "f23fdbdb-0285-4d43-b9bd-923b372eaf42"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.503126 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "f23fdbdb-0285-4d43-b9bd-923b372eaf42" (UID: "f23fdbdb-0285-4d43-b9bd-923b372eaf42"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.505603 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f23fdbdb-0285-4d43-b9bd-923b372eaf42-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f23fdbdb-0285-4d43-b9bd-923b372eaf42" (UID: "f23fdbdb-0285-4d43-b9bd-923b372eaf42"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.508675 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f23fdbdb-0285-4d43-b9bd-923b372eaf42" (UID: "f23fdbdb-0285-4d43-b9bd-923b372eaf42"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.514280 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f23fdbdb-0285-4d43-b9bd-923b372eaf42-pod-info" (OuterVolumeSpecName: "pod-info") pod "f23fdbdb-0285-4d43-b9bd-923b372eaf42" (UID: "f23fdbdb-0285-4d43-b9bd-923b372eaf42"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.516298 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-kube-api-access-srw6v" (OuterVolumeSpecName: "kube-api-access-srw6v") pod "f23fdbdb-0285-4d43-b9bd-923b372eaf42" (UID: "f23fdbdb-0285-4d43-b9bd-923b372eaf42"). InnerVolumeSpecName "kube-api-access-srw6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.523518 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-config-data" (OuterVolumeSpecName: "config-data") pod "f23fdbdb-0285-4d43-b9bd-923b372eaf42" (UID: "f23fdbdb-0285-4d43-b9bd-923b372eaf42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.580870 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srw6v\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-kube-api-access-srw6v\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.580934 4895 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f23fdbdb-0285-4d43-b9bd-923b372eaf42-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.580944 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.580954 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.580963 4895 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.580982 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.580990 4895 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f23fdbdb-0285-4d43-b9bd-923b372eaf42-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.581000 
4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.581037 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.585687 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-server-conf" (OuterVolumeSpecName: "server-conf") pod "f23fdbdb-0285-4d43-b9bd-923b372eaf42" (UID: "f23fdbdb-0285-4d43-b9bd-923b372eaf42"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.603285 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.613820 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f23fdbdb-0285-4d43-b9bd-923b372eaf42" (UID: "f23fdbdb-0285-4d43-b9bd-923b372eaf42"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.682890 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.682934 4895 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f23fdbdb-0285-4d43-b9bd-923b372eaf42-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:51 crc kubenswrapper[4895]: I0129 16:39:51.682947 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f23fdbdb-0285-4d43-b9bd-923b372eaf42-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.244744 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f23fdbdb-0285-4d43-b9bd-923b372eaf42","Type":"ContainerDied","Data":"57ea66f1cb0eadb044aaa3b204203612ac7b0a6518ed4e90e6c7870a1fde3afb"} Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.245328 4895 scope.go:117] "RemoveContainer" containerID="5fbbb5604144066b13f3f294e6d850ee393d3449bc20fb862307a8db580c2194" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.245534 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.301561 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.302181 4895 scope.go:117] "RemoveContainer" containerID="703927b788d49dd2fbc7dcbeded873e6df74abef9151860b8f1949f49dd98c6a" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.311576 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.333294 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:39:52 crc kubenswrapper[4895]: E0129 16:39:52.333795 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23fdbdb-0285-4d43-b9bd-923b372eaf42" containerName="rabbitmq" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.333820 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23fdbdb-0285-4d43-b9bd-923b372eaf42" containerName="rabbitmq" Jan 29 16:39:52 crc kubenswrapper[4895]: E0129 16:39:52.333844 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23fdbdb-0285-4d43-b9bd-923b372eaf42" containerName="setup-container" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.333853 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23fdbdb-0285-4d43-b9bd-923b372eaf42" containerName="setup-container" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.334259 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f23fdbdb-0285-4d43-b9bd-923b372eaf42" containerName="rabbitmq" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.335450 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.337672 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.339793 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.340012 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.340335 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9r9w4" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.340353 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.340554 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.341178 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.359515 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.509481 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d483294-14b5-4b14-8e09-e88d4d83a359-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.509582 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6d483294-14b5-4b14-8e09-e88d4d83a359-config-data\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.509764 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d483294-14b5-4b14-8e09-e88d4d83a359-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.509945 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d483294-14b5-4b14-8e09-e88d4d83a359-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.510192 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d483294-14b5-4b14-8e09-e88d4d83a359-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.510247 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2qw\" (UniqueName: \"kubernetes.io/projected/6d483294-14b5-4b14-8e09-e88d4d83a359-kube-api-access-5l2qw\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.510313 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") 
pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.510425 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d483294-14b5-4b14-8e09-e88d4d83a359-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.510575 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d483294-14b5-4b14-8e09-e88d4d83a359-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.510642 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d483294-14b5-4b14-8e09-e88d4d83a359-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.510826 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d483294-14b5-4b14-8e09-e88d4d83a359-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.613055 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d483294-14b5-4b14-8e09-e88d4d83a359-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") 
" pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.613124 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d483294-14b5-4b14-8e09-e88d4d83a359-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.613173 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d483294-14b5-4b14-8e09-e88d4d83a359-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.613191 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l2qw\" (UniqueName: \"kubernetes.io/projected/6d483294-14b5-4b14-8e09-e88d4d83a359-kube-api-access-5l2qw\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.613214 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.613243 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d483294-14b5-4b14-8e09-e88d4d83a359-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.613272 4895 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d483294-14b5-4b14-8e09-e88d4d83a359-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.613292 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d483294-14b5-4b14-8e09-e88d4d83a359-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.613321 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d483294-14b5-4b14-8e09-e88d4d83a359-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.613360 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d483294-14b5-4b14-8e09-e88d4d83a359-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.613410 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d483294-14b5-4b14-8e09-e88d4d83a359-config-data\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.614477 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d483294-14b5-4b14-8e09-e88d4d83a359-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.614753 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d483294-14b5-4b14-8e09-e88d4d83a359-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.615168 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d483294-14b5-4b14-8e09-e88d4d83a359-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.615756 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.616126 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d483294-14b5-4b14-8e09-e88d4d83a359-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.616638 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d483294-14b5-4b14-8e09-e88d4d83a359-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.621691 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d483294-14b5-4b14-8e09-e88d4d83a359-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.624284 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d483294-14b5-4b14-8e09-e88d4d83a359-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.624512 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d483294-14b5-4b14-8e09-e88d4d83a359-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.628120 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d483294-14b5-4b14-8e09-e88d4d83a359-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.653704 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l2qw\" (UniqueName: \"kubernetes.io/projected/6d483294-14b5-4b14-8e09-e88d4d83a359-kube-api-access-5l2qw\") pod \"rabbitmq-server-0\" (UID: \"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.659719 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: 
\"6d483294-14b5-4b14-8e09-e88d4d83a359\") " pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.730452 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.822798 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.918787 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3729063-b6e8-4de8-9ab9-7448a3ec325a-pod-info\") pod \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.918836 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb46f\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-kube-api-access-nb46f\") pod \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.918891 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.918920 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-plugins\") pod \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.919035 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-server-conf\") pod \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.919066 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-erlang-cookie\") pod \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.919128 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-plugins-conf\") pod \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.919178 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-config-data\") pod \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.919211 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-tls\") pod \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.919274 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-confd\") pod \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " Jan 29 16:39:52 crc kubenswrapper[4895]: 
I0129 16:39:52.919304 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3729063-b6e8-4de8-9ab9-7448a3ec325a-erlang-cookie-secret\") pod \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\" (UID: \"c3729063-b6e8-4de8-9ab9-7448a3ec325a\") " Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.920390 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c3729063-b6e8-4de8-9ab9-7448a3ec325a" (UID: "c3729063-b6e8-4de8-9ab9-7448a3ec325a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.921374 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c3729063-b6e8-4de8-9ab9-7448a3ec325a" (UID: "c3729063-b6e8-4de8-9ab9-7448a3ec325a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.922001 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c3729063-b6e8-4de8-9ab9-7448a3ec325a" (UID: "c3729063-b6e8-4de8-9ab9-7448a3ec325a"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.925137 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3729063-b6e8-4de8-9ab9-7448a3ec325a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c3729063-b6e8-4de8-9ab9-7448a3ec325a" (UID: "c3729063-b6e8-4de8-9ab9-7448a3ec325a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.925456 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c3729063-b6e8-4de8-9ab9-7448a3ec325a-pod-info" (OuterVolumeSpecName: "pod-info") pod "c3729063-b6e8-4de8-9ab9-7448a3ec325a" (UID: "c3729063-b6e8-4de8-9ab9-7448a3ec325a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.929102 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-kube-api-access-nb46f" (OuterVolumeSpecName: "kube-api-access-nb46f") pod "c3729063-b6e8-4de8-9ab9-7448a3ec325a" (UID: "c3729063-b6e8-4de8-9ab9-7448a3ec325a"). InnerVolumeSpecName "kube-api-access-nb46f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.929152 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c3729063-b6e8-4de8-9ab9-7448a3ec325a" (UID: "c3729063-b6e8-4de8-9ab9-7448a3ec325a"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.930314 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "c3729063-b6e8-4de8-9ab9-7448a3ec325a" (UID: "c3729063-b6e8-4de8-9ab9-7448a3ec325a"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:39:52 crc kubenswrapper[4895]: I0129 16:39:52.957178 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-config-data" (OuterVolumeSpecName: "config-data") pod "c3729063-b6e8-4de8-9ab9-7448a3ec325a" (UID: "c3729063-b6e8-4de8-9ab9-7448a3ec325a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.021722 4895 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3729063-b6e8-4de8-9ab9-7448a3ec325a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.021763 4895 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3729063-b6e8-4de8-9ab9-7448a3ec325a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.021775 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb46f\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-kube-api-access-nb46f\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.021814 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 29 16:39:53 crc 
kubenswrapper[4895]: I0129 16:39:53.021826 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.021836 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.021845 4895 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.021854 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.021881 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.022548 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-server-conf" (OuterVolumeSpecName: "server-conf") pod "c3729063-b6e8-4de8-9ab9-7448a3ec325a" (UID: "c3729063-b6e8-4de8-9ab9-7448a3ec325a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.049796 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f23fdbdb-0285-4d43-b9bd-923b372eaf42" path="/var/lib/kubelet/pods/f23fdbdb-0285-4d43-b9bd-923b372eaf42/volumes" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.057294 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.076851 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c3729063-b6e8-4de8-9ab9-7448a3ec325a" (UID: "c3729063-b6e8-4de8-9ab9-7448a3ec325a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.124570 4895 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3729063-b6e8-4de8-9ab9-7448a3ec325a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.124668 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.124684 4895 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3729063-b6e8-4de8-9ab9-7448a3ec325a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.262129 4895 generic.go:334] "Generic (PLEG): container finished" podID="c3729063-b6e8-4de8-9ab9-7448a3ec325a" containerID="0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82" 
exitCode=0 Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.262180 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3729063-b6e8-4de8-9ab9-7448a3ec325a","Type":"ContainerDied","Data":"0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82"} Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.262211 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3729063-b6e8-4de8-9ab9-7448a3ec325a","Type":"ContainerDied","Data":"ad02378dc6116e496fbf7b72c9f5b5fbc857f24e8fc460c1d352e028213ce12a"} Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.262234 4895 scope.go:117] "RemoveContainer" containerID="0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.262379 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.293321 4895 scope.go:117] "RemoveContainer" containerID="dedd8cbb1a70735f89c5ebe0a75831ee00fddc22aea7f9aced27f2b11b89c2a8" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.306243 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.345186 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.359536 4895 scope.go:117] "RemoveContainer" containerID="0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.367717 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:39:53 crc kubenswrapper[4895]: E0129 16:39:53.369164 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82\": container with ID starting with 0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82 not found: ID does not exist" containerID="0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.369248 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82"} err="failed to get container status \"0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82\": rpc error: code = NotFound desc = could not find container \"0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82\": container with ID starting with 0fb701cc98ec7ad1ffd4310469ff352bb2ccef6bffbfcdb06b1ac5f45273cc82 not found: ID does not exist" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.369289 4895 scope.go:117] "RemoveContainer" containerID="dedd8cbb1a70735f89c5ebe0a75831ee00fddc22aea7f9aced27f2b11b89c2a8" Jan 29 16:39:53 crc kubenswrapper[4895]: E0129 16:39:53.370675 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dedd8cbb1a70735f89c5ebe0a75831ee00fddc22aea7f9aced27f2b11b89c2a8\": container with ID starting with dedd8cbb1a70735f89c5ebe0a75831ee00fddc22aea7f9aced27f2b11b89c2a8 not found: ID does not exist" containerID="dedd8cbb1a70735f89c5ebe0a75831ee00fddc22aea7f9aced27f2b11b89c2a8" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.370744 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dedd8cbb1a70735f89c5ebe0a75831ee00fddc22aea7f9aced27f2b11b89c2a8"} err="failed to get container status \"dedd8cbb1a70735f89c5ebe0a75831ee00fddc22aea7f9aced27f2b11b89c2a8\": rpc error: code = NotFound desc = could not find container \"dedd8cbb1a70735f89c5ebe0a75831ee00fddc22aea7f9aced27f2b11b89c2a8\": container 
with ID starting with dedd8cbb1a70735f89c5ebe0a75831ee00fddc22aea7f9aced27f2b11b89c2a8 not found: ID does not exist" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.393790 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:39:53 crc kubenswrapper[4895]: E0129 16:39:53.394559 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3729063-b6e8-4de8-9ab9-7448a3ec325a" containerName="rabbitmq" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.394587 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3729063-b6e8-4de8-9ab9-7448a3ec325a" containerName="rabbitmq" Jan 29 16:39:53 crc kubenswrapper[4895]: E0129 16:39:53.394615 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3729063-b6e8-4de8-9ab9-7448a3ec325a" containerName="setup-container" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.394675 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3729063-b6e8-4de8-9ab9-7448a3ec325a" containerName="setup-container" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.395156 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3729063-b6e8-4de8-9ab9-7448a3ec325a" containerName="rabbitmq" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.396557 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.400459 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.400581 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.400762 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.400994 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.401397 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-w6b6m" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.401566 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.401594 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.410059 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.563618 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e928bf68-d1d2-4d90-b479-f589568e5145-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.564410 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e928bf68-d1d2-4d90-b479-f589568e5145-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.564451 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e928bf68-d1d2-4d90-b479-f589568e5145-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.564484 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.564916 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e928bf68-d1d2-4d90-b479-f589568e5145-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.564952 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e928bf68-d1d2-4d90-b479-f589568e5145-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.564982 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/e928bf68-d1d2-4d90-b479-f589568e5145-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.565004 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e928bf68-d1d2-4d90-b479-f589568e5145-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.565034 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e928bf68-d1d2-4d90-b479-f589568e5145-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.565060 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6p4n\" (UniqueName: \"kubernetes.io/projected/e928bf68-d1d2-4d90-b479-f589568e5145-kube-api-access-x6p4n\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.565140 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e928bf68-d1d2-4d90-b479-f589568e5145-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.666733 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/e928bf68-d1d2-4d90-b479-f589568e5145-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.666823 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e928bf68-d1d2-4d90-b479-f589568e5145-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.666845 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e928bf68-d1d2-4d90-b479-f589568e5145-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.666892 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e928bf68-d1d2-4d90-b479-f589568e5145-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.666924 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.667006 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e928bf68-d1d2-4d90-b479-f589568e5145-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.667029 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e928bf68-d1d2-4d90-b479-f589568e5145-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.667048 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e928bf68-d1d2-4d90-b479-f589568e5145-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.667068 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e928bf68-d1d2-4d90-b479-f589568e5145-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.667091 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e928bf68-d1d2-4d90-b479-f589568e5145-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.667112 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6p4n\" (UniqueName: \"kubernetes.io/projected/e928bf68-d1d2-4d90-b479-f589568e5145-kube-api-access-x6p4n\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.667512 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.667528 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e928bf68-d1d2-4d90-b479-f589568e5145-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.668038 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e928bf68-d1d2-4d90-b479-f589568e5145-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.668255 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e928bf68-d1d2-4d90-b479-f589568e5145-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.668417 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e928bf68-d1d2-4d90-b479-f589568e5145-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.668672 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e928bf68-d1d2-4d90-b479-f589568e5145-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.678091 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e928bf68-d1d2-4d90-b479-f589568e5145-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.686819 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e928bf68-d1d2-4d90-b479-f589568e5145-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.691651 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e928bf68-d1d2-4d90-b479-f589568e5145-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.699707 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6p4n\" (UniqueName: \"kubernetes.io/projected/e928bf68-d1d2-4d90-b479-f589568e5145-kube-api-access-x6p4n\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.712287 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/e928bf68-d1d2-4d90-b479-f589568e5145-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.716276 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e928bf68-d1d2-4d90-b479-f589568e5145\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:53 crc kubenswrapper[4895]: I0129 16:39:53.727595 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:39:54 crc kubenswrapper[4895]: I0129 16:39:54.268748 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:39:54 crc kubenswrapper[4895]: I0129 16:39:54.289254 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d483294-14b5-4b14-8e09-e88d4d83a359","Type":"ContainerStarted","Data":"8c9160a023a46bd1fab6b9838b5efe9b7ed341016b20a7e99c7a43ef3d99b729"} Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.050497 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3729063-b6e8-4de8-9ab9-7448a3ec325a" path="/var/lib/kubelet/pods/c3729063-b6e8-4de8-9ab9-7448a3ec325a/volumes" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.176236 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-9hwpz"] Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.177962 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.181050 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.193592 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-9hwpz"] Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.304225 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-config\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.304622 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-dns-svc\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.304662 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cphh\" (UniqueName: \"kubernetes.io/projected/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-kube-api-access-7cphh\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.304910 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " 
pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.305174 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.305269 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.314477 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e928bf68-d1d2-4d90-b479-f589568e5145","Type":"ContainerStarted","Data":"5665651d2ee25e7d1bd3c0ab32772615a22c903a44c1dab309b0f92413d6d704"} Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.407849 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-dns-svc\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.407950 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cphh\" (UniqueName: \"kubernetes.io/projected/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-kube-api-access-7cphh\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: 
I0129 16:39:55.407985 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.408027 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.408051 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.408134 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-config\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.409022 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-config\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.409201 4895 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.409626 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.409921 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-dns-svc\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.410333 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.437477 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cphh\" (UniqueName: \"kubernetes.io/projected/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-kube-api-access-7cphh\") pod \"dnsmasq-dns-578b8d767c-9hwpz\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:55 crc kubenswrapper[4895]: I0129 16:39:55.511136 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:56 crc kubenswrapper[4895]: I0129 16:39:56.012917 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-9hwpz"] Jan 29 16:39:56 crc kubenswrapper[4895]: I0129 16:39:56.045162 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-2lh75"] Jan 29 16:39:56 crc kubenswrapper[4895]: I0129 16:39:56.053583 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-2lh75"] Jan 29 16:39:56 crc kubenswrapper[4895]: I0129 16:39:56.325530 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" event={"ID":"c2c73d2e-294e-4a67-ad3b-d78e98dee95e","Type":"ContainerStarted","Data":"b26d39c6ce87215507cf3d49cd078367c86959b2cf4102377a3a7cdff6efb7ad"} Jan 29 16:39:56 crc kubenswrapper[4895]: I0129 16:39:56.327939 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d483294-14b5-4b14-8e09-e88d4d83a359","Type":"ContainerStarted","Data":"264882f64b463f8304f26b3bd4b3f52e66bdc82a4f7f16362d32f61fe4900221"} Jan 29 16:39:57 crc kubenswrapper[4895]: I0129 16:39:57.047261 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa46325-18c8-48b6-bfe2-d5492f6a5998" path="/var/lib/kubelet/pods/1fa46325-18c8-48b6-bfe2-d5492f6a5998/volumes" Jan 29 16:39:57 crc kubenswrapper[4895]: I0129 16:39:57.338807 4895 generic.go:334] "Generic (PLEG): container finished" podID="c2c73d2e-294e-4a67-ad3b-d78e98dee95e" containerID="5970e6969a87ee89af19e58c49224a72747f4ce5e5a91d3df4858f7a58cd7cda" exitCode=0 Jan 29 16:39:57 crc kubenswrapper[4895]: I0129 16:39:57.339040 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" event={"ID":"c2c73d2e-294e-4a67-ad3b-d78e98dee95e","Type":"ContainerDied","Data":"5970e6969a87ee89af19e58c49224a72747f4ce5e5a91d3df4858f7a58cd7cda"} Jan 29 
16:39:57 crc kubenswrapper[4895]: I0129 16:39:57.341787 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e928bf68-d1d2-4d90-b479-f589568e5145","Type":"ContainerStarted","Data":"55cb6b21c0c00659e3125d0ebc78a5ce374b8f48ee3ab00facd996a77c627eb5"} Jan 29 16:39:57 crc kubenswrapper[4895]: I0129 16:39:57.823939 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:39:57 crc kubenswrapper[4895]: I0129 16:39:57.824029 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:39:57 crc kubenswrapper[4895]: I0129 16:39:57.824088 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:39:57 crc kubenswrapper[4895]: I0129 16:39:57.825026 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:39:57 crc kubenswrapper[4895]: I0129 16:39:57.825092 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" 
containerID="cri-o://94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" gracePeriod=600 Jan 29 16:39:57 crc kubenswrapper[4895]: E0129 16:39:57.949575 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:39:58 crc kubenswrapper[4895]: I0129 16:39:58.357123 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" exitCode=0 Jan 29 16:39:58 crc kubenswrapper[4895]: I0129 16:39:58.357192 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230"} Jan 29 16:39:58 crc kubenswrapper[4895]: I0129 16:39:58.357288 4895 scope.go:117] "RemoveContainer" containerID="ba3dd6b954350bf38e8b9f1effc919dbdd8be56496986ff2037f29d7f2db3c91" Jan 29 16:39:58 crc kubenswrapper[4895]: I0129 16:39:58.358139 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:39:58 crc kubenswrapper[4895]: E0129 16:39:58.358436 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" 
podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:39:58 crc kubenswrapper[4895]: I0129 16:39:58.360135 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" event={"ID":"c2c73d2e-294e-4a67-ad3b-d78e98dee95e","Type":"ContainerStarted","Data":"a35d50176958148c229c9c789b1692e4f0e88c8c74f5addf3911b64839d17925"} Jan 29 16:39:58 crc kubenswrapper[4895]: I0129 16:39:58.360310 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:39:58 crc kubenswrapper[4895]: I0129 16:39:58.410143 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" podStartSLOduration=3.410114832 podStartE2EDuration="3.410114832s" podCreationTimestamp="2026-01-29 16:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:39:58.398890177 +0000 UTC m=+1682.201867461" watchObservedRunningTime="2026-01-29 16:39:58.410114832 +0000 UTC m=+1682.213092106" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.062977 4895 scope.go:117] "RemoveContainer" containerID="5ba5e726ef40424a2590c4cf387ed6924800b9c6572ed7b532884cd2f5ca1850" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.090112 4895 scope.go:117] "RemoveContainer" containerID="af293e0beaeecc17d4c902ea4388e1cfadca2da9982d9567189452e4167228e5" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.154625 4895 scope.go:117] "RemoveContainer" containerID="96ba3c6c7ab1a879aafe99cefc5574d28a6bb6356600144edcfa05efcac58319" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.186111 4895 scope.go:117] "RemoveContainer" containerID="0cf4de0b939eb3da07e6f227feda677476d1a85387acde0c8d0de5d9308e3dee" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.227245 4895 scope.go:117] "RemoveContainer" 
containerID="801c43d969da3d1ac44900d3047685e2e30ec75a3944a0f30bac8d0cdcdc9869" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.273149 4895 scope.go:117] "RemoveContainer" containerID="503eea865f11ffafa3cceb35f6b0f21a7488b621da2438b2120604970189eaeb" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.312318 4895 scope.go:117] "RemoveContainer" containerID="8b18606ccedc201a7d9d72891c392c0e9a525f1d479a63b65df3aa21c2b773a4" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.342411 4895 scope.go:117] "RemoveContainer" containerID="daef435e75b258e279d6f7199d5e890adbd4b7a620a073b109cb7e47e35237d1" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.372649 4895 scope.go:117] "RemoveContainer" containerID="16dda0fc19a6b3288e716f995eaccc2d29556874949da98b3d297625b59fc787" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.395976 4895 scope.go:117] "RemoveContainer" containerID="30e18be2da1743fb171f3dd20e7563947d2ce3b53278ba7fe5d84cae0464ff2c" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.420680 4895 scope.go:117] "RemoveContainer" containerID="4ada251d049c70aba74db1a7c382aeb4212e0f56bcf02401ffd588fc110c5f64" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.447103 4895 scope.go:117] "RemoveContainer" containerID="a70b605869b9af1074089e250447357707e74034b3c91ebb37f1930f827bb3bb" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.478083 4895 scope.go:117] "RemoveContainer" containerID="8accabf60fcc4324db39482f86e7b1476cec6ce880494834f5da8d22ea5585f1" Jan 29 16:39:59 crc kubenswrapper[4895]: I0129 16:39:59.509621 4895 scope.go:117] "RemoveContainer" containerID="02d80f8ae3938ea0b74c7adbf5023039a023eaa4e6b87b1b50117111b5ed61d9" Jan 29 16:40:00 crc kubenswrapper[4895]: I0129 16:40:00.047505 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7pr4f"] Jan 29 16:40:00 crc kubenswrapper[4895]: I0129 16:40:00.060219 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-db-sync-7pr4f"] Jan 29 16:40:01 crc kubenswrapper[4895]: I0129 16:40:01.049340 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd10e751-7bed-464f-a755-a183b5ed4412" path="/var/lib/kubelet/pods/cd10e751-7bed-464f-a755-a183b5ed4412/volumes" Jan 29 16:40:04 crc kubenswrapper[4895]: I0129 16:40:04.433393 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.513164 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.595515 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-4tnv5"] Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.595929 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" podUID="2434c3d1-86c7-4c7b-b431-c799de0dadd2" containerName="dnsmasq-dns" containerID="cri-o://878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1" gracePeriod=10 Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.833945 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-rxb7m"] Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.835749 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.847713 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvg2k\" (UniqueName: \"kubernetes.io/projected/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-kube-api-access-lvg2k\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.848397 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-config\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.848419 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.848447 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.848608 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" 
(UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.848697 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.855312 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-rxb7m"] Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.949224 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvg2k\" (UniqueName: \"kubernetes.io/projected/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-kube-api-access-lvg2k\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.949315 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-config\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.949338 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.949360 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.949400 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.949431 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.950369 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-config\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.950489 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.950495 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.951106 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.951101 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:05 crc kubenswrapper[4895]: I0129 16:40:05.972388 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvg2k\" (UniqueName: \"kubernetes.io/projected/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-kube-api-access-lvg2k\") pod \"dnsmasq-dns-fbc59fbb7-rxb7m\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.167071 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.188667 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.255971 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-ovsdbserver-nb\") pod \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.256049 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-dns-svc\") pod \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.256125 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-ovsdbserver-sb\") pod \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.256159 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-config\") pod \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.256216 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bppck\" (UniqueName: \"kubernetes.io/projected/2434c3d1-86c7-4c7b-b431-c799de0dadd2-kube-api-access-bppck\") pod \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\" (UID: \"2434c3d1-86c7-4c7b-b431-c799de0dadd2\") " Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.261223 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2434c3d1-86c7-4c7b-b431-c799de0dadd2-kube-api-access-bppck" (OuterVolumeSpecName: "kube-api-access-bppck") pod "2434c3d1-86c7-4c7b-b431-c799de0dadd2" (UID: "2434c3d1-86c7-4c7b-b431-c799de0dadd2"). InnerVolumeSpecName "kube-api-access-bppck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.351070 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-config" (OuterVolumeSpecName: "config") pod "2434c3d1-86c7-4c7b-b431-c799de0dadd2" (UID: "2434c3d1-86c7-4c7b-b431-c799de0dadd2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.363473 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.363538 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bppck\" (UniqueName: \"kubernetes.io/projected/2434c3d1-86c7-4c7b-b431-c799de0dadd2-kube-api-access-bppck\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.363989 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2434c3d1-86c7-4c7b-b431-c799de0dadd2" (UID: "2434c3d1-86c7-4c7b-b431-c799de0dadd2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.383532 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2434c3d1-86c7-4c7b-b431-c799de0dadd2" (UID: "2434c3d1-86c7-4c7b-b431-c799de0dadd2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.400425 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2434c3d1-86c7-4c7b-b431-c799de0dadd2" (UID: "2434c3d1-86c7-4c7b-b431-c799de0dadd2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.467282 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.467313 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.467325 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2434c3d1-86c7-4c7b-b431-c799de0dadd2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.474007 4895 generic.go:334] "Generic (PLEG): container finished" podID="2434c3d1-86c7-4c7b-b431-c799de0dadd2" containerID="878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1" exitCode=0 Jan 29 16:40:06 crc 
kubenswrapper[4895]: I0129 16:40:06.474062 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" event={"ID":"2434c3d1-86c7-4c7b-b431-c799de0dadd2","Type":"ContainerDied","Data":"878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1"} Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.474094 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.477451 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-4tnv5" event={"ID":"2434c3d1-86c7-4c7b-b431-c799de0dadd2","Type":"ContainerDied","Data":"b9f2612a27de4c2ce7116a34767744d97b6c50fc2475a074b745d1ab4c2246ec"} Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.477518 4895 scope.go:117] "RemoveContainer" containerID="878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.497204 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-rxb7m"] Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.535094 4895 scope.go:117] "RemoveContainer" containerID="fec41637012fd5bdb150fbbeab8acee8ed14a667c56a2c9868d4d58489f19bb8" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.550107 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-4tnv5"] Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.559671 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-4tnv5"] Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.561453 4895 scope.go:117] "RemoveContainer" containerID="878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1" Jan 29 16:40:06 crc kubenswrapper[4895]: E0129 16:40:06.561953 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1\": container with ID starting with 878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1 not found: ID does not exist" containerID="878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.562000 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1"} err="failed to get container status \"878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1\": rpc error: code = NotFound desc = could not find container \"878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1\": container with ID starting with 878cae2d5ebf2508fbcccff18dba27f12214c8b1a42827e6f5b351da35a2c0e1 not found: ID does not exist" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.562031 4895 scope.go:117] "RemoveContainer" containerID="fec41637012fd5bdb150fbbeab8acee8ed14a667c56a2c9868d4d58489f19bb8" Jan 29 16:40:06 crc kubenswrapper[4895]: E0129 16:40:06.562430 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fec41637012fd5bdb150fbbeab8acee8ed14a667c56a2c9868d4d58489f19bb8\": container with ID starting with fec41637012fd5bdb150fbbeab8acee8ed14a667c56a2c9868d4d58489f19bb8 not found: ID does not exist" containerID="fec41637012fd5bdb150fbbeab8acee8ed14a667c56a2c9868d4d58489f19bb8" Jan 29 16:40:06 crc kubenswrapper[4895]: I0129 16:40:06.562454 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec41637012fd5bdb150fbbeab8acee8ed14a667c56a2c9868d4d58489f19bb8"} err="failed to get container status \"fec41637012fd5bdb150fbbeab8acee8ed14a667c56a2c9868d4d58489f19bb8\": rpc error: code = NotFound desc = could not find container \"fec41637012fd5bdb150fbbeab8acee8ed14a667c56a2c9868d4d58489f19bb8\": 
container with ID starting with fec41637012fd5bdb150fbbeab8acee8ed14a667c56a2c9868d4d58489f19bb8 not found: ID does not exist" Jan 29 16:40:07 crc kubenswrapper[4895]: I0129 16:40:07.053899 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2434c3d1-86c7-4c7b-b431-c799de0dadd2" path="/var/lib/kubelet/pods/2434c3d1-86c7-4c7b-b431-c799de0dadd2/volumes" Jan 29 16:40:07 crc kubenswrapper[4895]: I0129 16:40:07.499247 4895 generic.go:334] "Generic (PLEG): container finished" podID="27cd5ecf-82d4-4495-8e11-7ae1b73e6506" containerID="656b75a73b476dd50682d06760de0c8ef57df7303f790d4b1e44dcbe9a65f91e" exitCode=0 Jan 29 16:40:07 crc kubenswrapper[4895]: I0129 16:40:07.499294 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" event={"ID":"27cd5ecf-82d4-4495-8e11-7ae1b73e6506","Type":"ContainerDied","Data":"656b75a73b476dd50682d06760de0c8ef57df7303f790d4b1e44dcbe9a65f91e"} Jan 29 16:40:07 crc kubenswrapper[4895]: I0129 16:40:07.499319 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" event={"ID":"27cd5ecf-82d4-4495-8e11-7ae1b73e6506","Type":"ContainerStarted","Data":"cda415dccd1e1c2dc355baab18bfc7e9d472d9ffbfa91742fc36d1dc04075a34"} Jan 29 16:40:08 crc kubenswrapper[4895]: I0129 16:40:08.513441 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" event={"ID":"27cd5ecf-82d4-4495-8e11-7ae1b73e6506","Type":"ContainerStarted","Data":"397a7186e9f85393d3bcaa7cc82cae0af0174d2af02353d34147c282271e8b91"} Jan 29 16:40:08 crc kubenswrapper[4895]: I0129 16:40:08.513997 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:08 crc kubenswrapper[4895]: I0129 16:40:08.550993 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" podStartSLOduration=3.550957691 podStartE2EDuration="3.550957691s" 
podCreationTimestamp="2026-01-29 16:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:40:08.53431471 +0000 UTC m=+1692.337291984" watchObservedRunningTime="2026-01-29 16:40:08.550957691 +0000 UTC m=+1692.353934985" Jan 29 16:40:10 crc kubenswrapper[4895]: I0129 16:40:10.037476 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:40:10 crc kubenswrapper[4895]: E0129 16:40:10.038327 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.170251 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.258665 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-9hwpz"] Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.259424 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" podUID="c2c73d2e-294e-4a67-ad3b-d78e98dee95e" containerName="dnsmasq-dns" containerID="cri-o://a35d50176958148c229c9c789b1692e4f0e88c8c74f5addf3911b64839d17925" gracePeriod=10 Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.644032 4895 generic.go:334] "Generic (PLEG): container finished" podID="c2c73d2e-294e-4a67-ad3b-d78e98dee95e" containerID="a35d50176958148c229c9c789b1692e4f0e88c8c74f5addf3911b64839d17925" exitCode=0 Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 
16:40:16.644093 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" event={"ID":"c2c73d2e-294e-4a67-ad3b-d78e98dee95e","Type":"ContainerDied","Data":"a35d50176958148c229c9c789b1692e4f0e88c8c74f5addf3911b64839d17925"} Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.796213 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.825666 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cphh\" (UniqueName: \"kubernetes.io/projected/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-kube-api-access-7cphh\") pod \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.825782 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-config\") pod \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.826169 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-ovsdbserver-sb\") pod \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.826251 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-dns-svc\") pod \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.826352 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-openstack-edpm-ipam\") pod \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.826401 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-ovsdbserver-nb\") pod \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\" (UID: \"c2c73d2e-294e-4a67-ad3b-d78e98dee95e\") " Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.850586 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-kube-api-access-7cphh" (OuterVolumeSpecName: "kube-api-access-7cphh") pod "c2c73d2e-294e-4a67-ad3b-d78e98dee95e" (UID: "c2c73d2e-294e-4a67-ad3b-d78e98dee95e"). InnerVolumeSpecName "kube-api-access-7cphh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.904986 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2c73d2e-294e-4a67-ad3b-d78e98dee95e" (UID: "c2c73d2e-294e-4a67-ad3b-d78e98dee95e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.911853 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2c73d2e-294e-4a67-ad3b-d78e98dee95e" (UID: "c2c73d2e-294e-4a67-ad3b-d78e98dee95e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.915494 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c2c73d2e-294e-4a67-ad3b-d78e98dee95e" (UID: "c2c73d2e-294e-4a67-ad3b-d78e98dee95e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.928666 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-config" (OuterVolumeSpecName: "config") pod "c2c73d2e-294e-4a67-ad3b-d78e98dee95e" (UID: "c2c73d2e-294e-4a67-ad3b-d78e98dee95e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.929182 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.929219 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.929235 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cphh\" (UniqueName: \"kubernetes.io/projected/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-kube-api-access-7cphh\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.929251 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-config\") on node \"crc\" 
DevicePath \"\"" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.929261 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:16 crc kubenswrapper[4895]: I0129 16:40:16.950093 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2c73d2e-294e-4a67-ad3b-d78e98dee95e" (UID: "c2c73d2e-294e-4a67-ad3b-d78e98dee95e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:17 crc kubenswrapper[4895]: I0129 16:40:17.030547 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2c73d2e-294e-4a67-ad3b-d78e98dee95e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:17 crc kubenswrapper[4895]: I0129 16:40:17.656972 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" event={"ID":"c2c73d2e-294e-4a67-ad3b-d78e98dee95e","Type":"ContainerDied","Data":"b26d39c6ce87215507cf3d49cd078367c86959b2cf4102377a3a7cdff6efb7ad"} Jan 29 16:40:17 crc kubenswrapper[4895]: I0129 16:40:17.657038 4895 scope.go:117] "RemoveContainer" containerID="a35d50176958148c229c9c789b1692e4f0e88c8c74f5addf3911b64839d17925" Jan 29 16:40:17 crc kubenswrapper[4895]: I0129 16:40:17.657040 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-9hwpz" Jan 29 16:40:17 crc kubenswrapper[4895]: I0129 16:40:17.685970 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-9hwpz"] Jan 29 16:40:17 crc kubenswrapper[4895]: I0129 16:40:17.687963 4895 scope.go:117] "RemoveContainer" containerID="5970e6969a87ee89af19e58c49224a72747f4ce5e5a91d3df4858f7a58cd7cda" Jan 29 16:40:17 crc kubenswrapper[4895]: I0129 16:40:17.694524 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-9hwpz"] Jan 29 16:40:19 crc kubenswrapper[4895]: I0129 16:40:19.052584 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c73d2e-294e-4a67-ad3b-d78e98dee95e" path="/var/lib/kubelet/pods/c2c73d2e-294e-4a67-ad3b-d78e98dee95e/volumes" Jan 29 16:40:24 crc kubenswrapper[4895]: I0129 16:40:24.036962 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:40:24 crc kubenswrapper[4895]: E0129 16:40:24.038273 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.586292 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk"] Jan 29 16:40:26 crc kubenswrapper[4895]: E0129 16:40:26.587117 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2434c3d1-86c7-4c7b-b431-c799de0dadd2" containerName="dnsmasq-dns" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.587132 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2434c3d1-86c7-4c7b-b431-c799de0dadd2" containerName="dnsmasq-dns" Jan 29 16:40:26 crc kubenswrapper[4895]: E0129 16:40:26.587157 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c73d2e-294e-4a67-ad3b-d78e98dee95e" containerName="init" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.587165 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c73d2e-294e-4a67-ad3b-d78e98dee95e" containerName="init" Jan 29 16:40:26 crc kubenswrapper[4895]: E0129 16:40:26.587194 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2434c3d1-86c7-4c7b-b431-c799de0dadd2" containerName="init" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.587201 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2434c3d1-86c7-4c7b-b431-c799de0dadd2" containerName="init" Jan 29 16:40:26 crc kubenswrapper[4895]: E0129 16:40:26.587211 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c73d2e-294e-4a67-ad3b-d78e98dee95e" containerName="dnsmasq-dns" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.587217 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c73d2e-294e-4a67-ad3b-d78e98dee95e" containerName="dnsmasq-dns" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.587392 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c73d2e-294e-4a67-ad3b-d78e98dee95e" containerName="dnsmasq-dns" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.587405 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2434c3d1-86c7-4c7b-b431-c799de0dadd2" containerName="dnsmasq-dns" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.588065 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.592314 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.595198 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.596017 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.599143 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.612534 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk"] Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.637221 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djthk\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.637633 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djthk\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.637792 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lwm6\" (UniqueName: \"kubernetes.io/projected/c181aaa5-19e0-4d8b-807b-b494677ec871-kube-api-access-7lwm6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djthk\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.637992 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djthk\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.738719 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djthk\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.738813 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djthk\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.738881 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djthk\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.738915 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lwm6\" (UniqueName: \"kubernetes.io/projected/c181aaa5-19e0-4d8b-807b-b494677ec871-kube-api-access-7lwm6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djthk\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.747626 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djthk\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.747698 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djthk\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.748210 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djthk\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.759609 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lwm6\" (UniqueName: \"kubernetes.io/projected/c181aaa5-19e0-4d8b-807b-b494677ec871-kube-api-access-7lwm6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djthk\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:26 crc kubenswrapper[4895]: I0129 16:40:26.917376 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:27 crc kubenswrapper[4895]: I0129 16:40:27.267099 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk"] Jan 29 16:40:27 crc kubenswrapper[4895]: W0129 16:40:27.270393 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc181aaa5_19e0_4d8b_807b_b494677ec871.slice/crio-7d9dd6c44cb30d18b02a7309e8541ddde5c25a2216f9420bc374443dd2b487fe WatchSource:0}: Error finding container 7d9dd6c44cb30d18b02a7309e8541ddde5c25a2216f9420bc374443dd2b487fe: Status 404 returned error can't find the container with id 7d9dd6c44cb30d18b02a7309e8541ddde5c25a2216f9420bc374443dd2b487fe Jan 29 16:40:27 crc kubenswrapper[4895]: I0129 16:40:27.791225 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" event={"ID":"c181aaa5-19e0-4d8b-807b-b494677ec871","Type":"ContainerStarted","Data":"7d9dd6c44cb30d18b02a7309e8541ddde5c25a2216f9420bc374443dd2b487fe"} Jan 29 16:40:27 crc kubenswrapper[4895]: I0129 16:40:27.795226 4895 generic.go:334] "Generic (PLEG): container finished" podID="6d483294-14b5-4b14-8e09-e88d4d83a359" 
containerID="264882f64b463f8304f26b3bd4b3f52e66bdc82a4f7f16362d32f61fe4900221" exitCode=0 Jan 29 16:40:27 crc kubenswrapper[4895]: I0129 16:40:27.795307 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d483294-14b5-4b14-8e09-e88d4d83a359","Type":"ContainerDied","Data":"264882f64b463f8304f26b3bd4b3f52e66bdc82a4f7f16362d32f61fe4900221"} Jan 29 16:40:28 crc kubenswrapper[4895]: I0129 16:40:28.807558 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d483294-14b5-4b14-8e09-e88d4d83a359","Type":"ContainerStarted","Data":"8fe51b414263d5545397fbda4d8a5fc1da9c721341b31b6b81a87a7d05ac7fba"} Jan 29 16:40:28 crc kubenswrapper[4895]: I0129 16:40:28.808396 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 16:40:28 crc kubenswrapper[4895]: I0129 16:40:28.844463 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.84442873 podStartE2EDuration="36.84442873s" podCreationTimestamp="2026-01-29 16:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:40:28.835201989 +0000 UTC m=+1712.638179273" watchObservedRunningTime="2026-01-29 16:40:28.84442873 +0000 UTC m=+1712.647405994" Jan 29 16:40:29 crc kubenswrapper[4895]: I0129 16:40:29.819668 4895 generic.go:334] "Generic (PLEG): container finished" podID="e928bf68-d1d2-4d90-b479-f589568e5145" containerID="55cb6b21c0c00659e3125d0ebc78a5ce374b8f48ee3ab00facd996a77c627eb5" exitCode=0 Jan 29 16:40:29 crc kubenswrapper[4895]: I0129 16:40:29.819749 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e928bf68-d1d2-4d90-b479-f589568e5145","Type":"ContainerDied","Data":"55cb6b21c0c00659e3125d0ebc78a5ce374b8f48ee3ab00facd996a77c627eb5"} Jan 29 16:40:30 
crc kubenswrapper[4895]: I0129 16:40:30.061266 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-mmtp6"] Jan 29 16:40:30 crc kubenswrapper[4895]: I0129 16:40:30.073927 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7s9sc"] Jan 29 16:40:30 crc kubenswrapper[4895]: I0129 16:40:30.083543 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-mmtp6"] Jan 29 16:40:30 crc kubenswrapper[4895]: I0129 16:40:30.095357 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7s9sc"] Jan 29 16:40:30 crc kubenswrapper[4895]: I0129 16:40:30.833448 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e928bf68-d1d2-4d90-b479-f589568e5145","Type":"ContainerStarted","Data":"76b464187c85d8af1a2708c5c3dcc8af2f84bb5cc41ab07f8c4b7045ea0183d4"} Jan 29 16:40:30 crc kubenswrapper[4895]: I0129 16:40:30.833989 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:40:30 crc kubenswrapper[4895]: I0129 16:40:30.873322 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.873296353 podStartE2EDuration="37.873296353s" podCreationTimestamp="2026-01-29 16:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:40:30.864487344 +0000 UTC m=+1714.667464628" watchObservedRunningTime="2026-01-29 16:40:30.873296353 +0000 UTC m=+1714.676273647" Jan 29 16:40:31 crc kubenswrapper[4895]: I0129 16:40:31.081636 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293" path="/var/lib/kubelet/pods/0f9f2f3b-db3e-4c7e-8ef2-4efc06b8b293/volumes" Jan 29 16:40:31 crc kubenswrapper[4895]: I0129 16:40:31.082703 4895 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59598724-bed0-4b4c-9957-6c282df5b4a5" path="/var/lib/kubelet/pods/59598724-bed0-4b4c-9957-6c282df5b4a5/volumes" Jan 29 16:40:31 crc kubenswrapper[4895]: I0129 16:40:31.084073 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2vnnx"] Jan 29 16:40:31 crc kubenswrapper[4895]: I0129 16:40:31.084124 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2vnnx"] Jan 29 16:40:33 crc kubenswrapper[4895]: I0129 16:40:33.050231 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ca03651-86f1-4b94-bdfc-ff182c872873" path="/var/lib/kubelet/pods/4ca03651-86f1-4b94-bdfc-ff182c872873/volumes" Jan 29 16:40:34 crc kubenswrapper[4895]: I0129 16:40:34.039026 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-bv8g6"] Jan 29 16:40:34 crc kubenswrapper[4895]: I0129 16:40:34.046965 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-bv8g6"] Jan 29 16:40:35 crc kubenswrapper[4895]: I0129 16:40:35.050825 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257f0f91-6612-425d-9cff-50bc99ca7979" path="/var/lib/kubelet/pods/257f0f91-6612-425d-9cff-50bc99ca7979/volumes" Jan 29 16:40:36 crc kubenswrapper[4895]: I0129 16:40:36.907794 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" event={"ID":"c181aaa5-19e0-4d8b-807b-b494677ec871","Type":"ContainerStarted","Data":"5fc19dd6aafea7991a4517f16730cb95e6688ffd2964f1303031416048901a65"} Jan 29 16:40:36 crc kubenswrapper[4895]: I0129 16:40:36.932940 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" podStartSLOduration=1.70672124 podStartE2EDuration="10.932915049s" podCreationTimestamp="2026-01-29 16:40:26 +0000 UTC" 
firstStartedPulling="2026-01-29 16:40:27.273411716 +0000 UTC m=+1711.076388980" lastFinishedPulling="2026-01-29 16:40:36.499605515 +0000 UTC m=+1720.302582789" observedRunningTime="2026-01-29 16:40:36.923845304 +0000 UTC m=+1720.726822588" watchObservedRunningTime="2026-01-29 16:40:36.932915049 +0000 UTC m=+1720.735892343" Jan 29 16:40:37 crc kubenswrapper[4895]: I0129 16:40:37.044278 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:40:37 crc kubenswrapper[4895]: E0129 16:40:37.044639 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:40:42 crc kubenswrapper[4895]: I0129 16:40:42.737440 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 16:40:43 crc kubenswrapper[4895]: I0129 16:40:43.733216 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:40:45 crc kubenswrapper[4895]: I0129 16:40:45.050087 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-6fv5s"] Jan 29 16:40:45 crc kubenswrapper[4895]: I0129 16:40:45.050431 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-6fv5s"] Jan 29 16:40:47 crc kubenswrapper[4895]: I0129 16:40:47.048682 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01eead73-2722-45a1-a5f1-fa4522c0041b" path="/var/lib/kubelet/pods/01eead73-2722-45a1-a5f1-fa4522c0041b/volumes" Jan 29 16:40:48 crc kubenswrapper[4895]: I0129 16:40:48.016460 4895 generic.go:334] "Generic 
(PLEG): container finished" podID="c181aaa5-19e0-4d8b-807b-b494677ec871" containerID="5fc19dd6aafea7991a4517f16730cb95e6688ffd2964f1303031416048901a65" exitCode=0 Jan 29 16:40:48 crc kubenswrapper[4895]: I0129 16:40:48.016514 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" event={"ID":"c181aaa5-19e0-4d8b-807b-b494677ec871","Type":"ContainerDied","Data":"5fc19dd6aafea7991a4517f16730cb95e6688ffd2964f1303031416048901a65"} Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.492153 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.599569 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-inventory\") pod \"c181aaa5-19e0-4d8b-807b-b494677ec871\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.599617 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-repo-setup-combined-ca-bundle\") pod \"c181aaa5-19e0-4d8b-807b-b494677ec871\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.599655 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lwm6\" (UniqueName: \"kubernetes.io/projected/c181aaa5-19e0-4d8b-807b-b494677ec871-kube-api-access-7lwm6\") pod \"c181aaa5-19e0-4d8b-807b-b494677ec871\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.599685 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-ssh-key-openstack-edpm-ipam\") pod \"c181aaa5-19e0-4d8b-807b-b494677ec871\" (UID: \"c181aaa5-19e0-4d8b-807b-b494677ec871\") " Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.606860 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c181aaa5-19e0-4d8b-807b-b494677ec871" (UID: "c181aaa5-19e0-4d8b-807b-b494677ec871"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.608435 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c181aaa5-19e0-4d8b-807b-b494677ec871-kube-api-access-7lwm6" (OuterVolumeSpecName: "kube-api-access-7lwm6") pod "c181aaa5-19e0-4d8b-807b-b494677ec871" (UID: "c181aaa5-19e0-4d8b-807b-b494677ec871"). InnerVolumeSpecName "kube-api-access-7lwm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.631389 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-inventory" (OuterVolumeSpecName: "inventory") pod "c181aaa5-19e0-4d8b-807b-b494677ec871" (UID: "c181aaa5-19e0-4d8b-807b-b494677ec871"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.632634 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c181aaa5-19e0-4d8b-807b-b494677ec871" (UID: "c181aaa5-19e0-4d8b-807b-b494677ec871"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.703547 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.703647 4895 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.703681 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lwm6\" (UniqueName: \"kubernetes.io/projected/c181aaa5-19e0-4d8b-807b-b494677ec871-kube-api-access-7lwm6\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:49 crc kubenswrapper[4895]: I0129 16:40:49.703711 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c181aaa5-19e0-4d8b-807b-b494677ec871-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.036709 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:40:50 crc kubenswrapper[4895]: E0129 16:40:50.037480 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.037579 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" event={"ID":"c181aaa5-19e0-4d8b-807b-b494677ec871","Type":"ContainerDied","Data":"7d9dd6c44cb30d18b02a7309e8541ddde5c25a2216f9420bc374443dd2b487fe"} Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.037617 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d9dd6c44cb30d18b02a7309e8541ddde5c25a2216f9420bc374443dd2b487fe" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.037668 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.136447 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2"] Jan 29 16:40:50 crc kubenswrapper[4895]: E0129 16:40:50.136984 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c181aaa5-19e0-4d8b-807b-b494677ec871" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.137010 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c181aaa5-19e0-4d8b-807b-b494677ec871" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.137221 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c181aaa5-19e0-4d8b-807b-b494677ec871" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.137981 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.140830 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.142793 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.143122 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.144232 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.161997 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2"] Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.215213 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vmx9\" (UniqueName: \"kubernetes.io/projected/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-kube-api-access-5vmx9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.215307 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 
16:40:50.215371 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.215508 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.317787 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.318268 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.318499 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.318708 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vmx9\" (UniqueName: \"kubernetes.io/projected/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-kube-api-access-5vmx9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.323944 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.324535 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.325264 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.335974 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vmx9\" (UniqueName: \"kubernetes.io/projected/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-kube-api-access-5vmx9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:50 crc kubenswrapper[4895]: I0129 16:40:50.459385 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:40:51 crc kubenswrapper[4895]: I0129 16:40:51.006622 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2"] Jan 29 16:40:51 crc kubenswrapper[4895]: W0129 16:40:51.007679 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e9a6bba_a6c1_4ee0_89a6_e36735a9e18b.slice/crio-f509e6f96f68d0ea2f972bb6f37bcbcc0a53fb8b5256be76530ea1af5293215c WatchSource:0}: Error finding container f509e6f96f68d0ea2f972bb6f37bcbcc0a53fb8b5256be76530ea1af5293215c: Status 404 returned error can't find the container with id f509e6f96f68d0ea2f972bb6f37bcbcc0a53fb8b5256be76530ea1af5293215c Jan 29 16:40:51 crc kubenswrapper[4895]: I0129 16:40:51.074839 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" event={"ID":"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b","Type":"ContainerStarted","Data":"f509e6f96f68d0ea2f972bb6f37bcbcc0a53fb8b5256be76530ea1af5293215c"} Jan 29 16:40:52 crc kubenswrapper[4895]: I0129 16:40:52.087670 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" 
event={"ID":"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b","Type":"ContainerStarted","Data":"7831da4cbb000ea4ab55af544abfd485474a959c09bca57f168c7c0ea23536cc"} Jan 29 16:40:52 crc kubenswrapper[4895]: I0129 16:40:52.129763 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" podStartSLOduration=1.715517206 podStartE2EDuration="2.129741852s" podCreationTimestamp="2026-01-29 16:40:50 +0000 UTC" firstStartedPulling="2026-01-29 16:40:51.012389034 +0000 UTC m=+1734.815366298" lastFinishedPulling="2026-01-29 16:40:51.42661368 +0000 UTC m=+1735.229590944" observedRunningTime="2026-01-29 16:40:52.123492632 +0000 UTC m=+1735.926469916" watchObservedRunningTime="2026-01-29 16:40:52.129741852 +0000 UTC m=+1735.932719116" Jan 29 16:40:59 crc kubenswrapper[4895]: I0129 16:40:59.891526 4895 scope.go:117] "RemoveContainer" containerID="769bf513924f314d7ab909e795ad11a1fbd5bf38330c8079d419322f03184dfa" Jan 29 16:41:00 crc kubenswrapper[4895]: I0129 16:41:00.353746 4895 scope.go:117] "RemoveContainer" containerID="007ad5d447907115a9942c2166eab6fbbbd96d2723d28430f48cfdde3e3059ef" Jan 29 16:41:00 crc kubenswrapper[4895]: I0129 16:41:00.453704 4895 scope.go:117] "RemoveContainer" containerID="817136ba4b1e589887ea981ba1282ee97d636e6e1ba7eaa8b8e35ff17d55e463" Jan 29 16:41:00 crc kubenswrapper[4895]: I0129 16:41:00.805772 4895 scope.go:117] "RemoveContainer" containerID="7bda84e034982ccb40df97d698ecaccef02e743160c5520305f9dbda15e89011" Jan 29 16:41:00 crc kubenswrapper[4895]: I0129 16:41:00.905149 4895 scope.go:117] "RemoveContainer" containerID="43771ead4d0270ea5be9d2e1d7a0fa9fd7b0714ecda6af8c8c5321a8a71efc8d" Jan 29 16:41:00 crc kubenswrapper[4895]: I0129 16:41:00.964124 4895 scope.go:117] "RemoveContainer" containerID="f30403d180e1aefa552e8e24e6e15ea26d4837b48e94805a66494cd91129f917" Jan 29 16:41:05 crc kubenswrapper[4895]: I0129 16:41:05.038040 4895 scope.go:117] "RemoveContainer" 
containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:41:05 crc kubenswrapper[4895]: E0129 16:41:05.039828 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:41:15 crc kubenswrapper[4895]: I0129 16:41:15.054300 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xdhz7"] Jan 29 16:41:15 crc kubenswrapper[4895]: I0129 16:41:15.064039 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-sb5kw"] Jan 29 16:41:15 crc kubenswrapper[4895]: I0129 16:41:15.073730 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xdhz7"] Jan 29 16:41:15 crc kubenswrapper[4895]: I0129 16:41:15.082435 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-h9tc2"] Jan 29 16:41:15 crc kubenswrapper[4895]: I0129 16:41:15.090391 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-sb5kw"] Jan 29 16:41:15 crc kubenswrapper[4895]: I0129 16:41:15.100158 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-h9tc2"] Jan 29 16:41:15 crc kubenswrapper[4895]: I0129 16:41:15.107709 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-922d-account-create-update-r88cx"] Jan 29 16:41:15 crc kubenswrapper[4895]: I0129 16:41:15.115270 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-922d-account-create-update-r88cx"] Jan 29 16:41:16 crc kubenswrapper[4895]: I0129 16:41:16.031616 4895 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell0-5ca0-account-create-update-rpvdg"] Jan 29 16:41:16 crc kubenswrapper[4895]: I0129 16:41:16.042524 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cc6a-account-create-update-4dcck"] Jan 29 16:41:16 crc kubenswrapper[4895]: I0129 16:41:16.051262 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5ca0-account-create-update-rpvdg"] Jan 29 16:41:16 crc kubenswrapper[4895]: I0129 16:41:16.059486 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cc6a-account-create-update-4dcck"] Jan 29 16:41:17 crc kubenswrapper[4895]: I0129 16:41:17.042723 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:41:17 crc kubenswrapper[4895]: E0129 16:41:17.043107 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:41:17 crc kubenswrapper[4895]: I0129 16:41:17.082247 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d6c2e3-3343-463d-b1bf-096a5a7b5108" path="/var/lib/kubelet/pods/02d6c2e3-3343-463d-b1bf-096a5a7b5108/volumes" Jan 29 16:41:17 crc kubenswrapper[4895]: I0129 16:41:17.084644 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2d8de9-c774-4b94-8bab-0ba6b70dde52" path="/var/lib/kubelet/pods/0c2d8de9-c774-4b94-8bab-0ba6b70dde52/volumes" Jan 29 16:41:17 crc kubenswrapper[4895]: I0129 16:41:17.085706 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35221a68-d9d1-4630-8ade-81eca4fb1a57" 
path="/var/lib/kubelet/pods/35221a68-d9d1-4630-8ade-81eca4fb1a57/volumes" Jan 29 16:41:17 crc kubenswrapper[4895]: I0129 16:41:17.086756 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3707acb-376a-4821-b6cb-6f1e0a617931" path="/var/lib/kubelet/pods/b3707acb-376a-4821-b6cb-6f1e0a617931/volumes" Jan 29 16:41:17 crc kubenswrapper[4895]: I0129 16:41:17.088053 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bce84108-fcb8-4d23-8575-95458b165761" path="/var/lib/kubelet/pods/bce84108-fcb8-4d23-8575-95458b165761/volumes" Jan 29 16:41:17 crc kubenswrapper[4895]: I0129 16:41:17.088789 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d250ad9a-db3b-4e4b-a5f8-1b4ab945c278" path="/var/lib/kubelet/pods/d250ad9a-db3b-4e4b-a5f8-1b4ab945c278/volumes" Jan 29 16:41:32 crc kubenswrapper[4895]: I0129 16:41:32.037305 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:41:32 crc kubenswrapper[4895]: E0129 16:41:32.038311 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:41:43 crc kubenswrapper[4895]: I0129 16:41:43.037335 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:41:43 crc kubenswrapper[4895]: E0129 16:41:43.038282 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:41:48 crc kubenswrapper[4895]: I0129 16:41:48.097332 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k74gb"] Jan 29 16:41:48 crc kubenswrapper[4895]: I0129 16:41:48.104269 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-k74gb"] Jan 29 16:41:49 crc kubenswrapper[4895]: I0129 16:41:49.050058 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="066caef9-34c4-40a1-b7d4-cdfb48c02fe4" path="/var/lib/kubelet/pods/066caef9-34c4-40a1-b7d4-cdfb48c02fe4/volumes" Jan 29 16:41:56 crc kubenswrapper[4895]: I0129 16:41:56.037631 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:41:56 crc kubenswrapper[4895]: E0129 16:41:56.038904 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:42:01 crc kubenswrapper[4895]: I0129 16:42:01.140737 4895 scope.go:117] "RemoveContainer" containerID="dd3b74b9375bc6120a1fd38e24b9127aeebdb5398ba001065badc08cc5ba3188" Jan 29 16:42:01 crc kubenswrapper[4895]: I0129 16:42:01.174996 4895 scope.go:117] "RemoveContainer" containerID="9046ee3b885bd70a3e4f41768dbbfaf18578f4df0f31b545d1f61793983a30aa" Jan 29 16:42:01 crc kubenswrapper[4895]: I0129 16:42:01.225608 4895 scope.go:117] "RemoveContainer" 
containerID="11bf490bb5a899a45bacba94b61047fe1def78fd30a8fee4035b5a5b459698b0" Jan 29 16:42:01 crc kubenswrapper[4895]: I0129 16:42:01.265796 4895 scope.go:117] "RemoveContainer" containerID="9a0beeaf50f23e3987d88c9287c6434e6cf1eb4e578eafe6d1abd2f69f373769" Jan 29 16:42:01 crc kubenswrapper[4895]: I0129 16:42:01.312250 4895 scope.go:117] "RemoveContainer" containerID="1919c85f4c8b420396935fabcb6a0ce3bd7dbb180ce7231267d960897ab20c45" Jan 29 16:42:01 crc kubenswrapper[4895]: I0129 16:42:01.354417 4895 scope.go:117] "RemoveContainer" containerID="589cebf9efc55704e524d2404473ab8e5ebae744d498ea8eeba43dd05a1b3c47" Jan 29 16:42:01 crc kubenswrapper[4895]: I0129 16:42:01.416709 4895 scope.go:117] "RemoveContainer" containerID="acc8eb58be376a4d3fcc57bb807fb6038e855125c8717fd23fcf1d1abfa6b615" Jan 29 16:42:09 crc kubenswrapper[4895]: I0129 16:42:09.038167 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:42:09 crc kubenswrapper[4895]: E0129 16:42:09.039908 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:42:21 crc kubenswrapper[4895]: I0129 16:42:21.056026 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-vmblf"] Jan 29 16:42:21 crc kubenswrapper[4895]: I0129 16:42:21.068235 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-vmblf"] Jan 29 16:42:22 crc kubenswrapper[4895]: I0129 16:42:22.037242 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:42:22 crc 
kubenswrapper[4895]: E0129 16:42:22.038077 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:42:23 crc kubenswrapper[4895]: I0129 16:42:23.048938 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de7c5e0-275a-4e2e-9451-a653e428b29f" path="/var/lib/kubelet/pods/8de7c5e0-275a-4e2e-9451-a653e428b29f/volumes" Jan 29 16:42:23 crc kubenswrapper[4895]: I0129 16:42:23.049591 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bt4wg"] Jan 29 16:42:23 crc kubenswrapper[4895]: I0129 16:42:23.049619 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bt4wg"] Jan 29 16:42:25 crc kubenswrapper[4895]: I0129 16:42:25.047412 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c09db6-a699-4b75-9503-08bfc7ad94c1" path="/var/lib/kubelet/pods/53c09db6-a699-4b75-9503-08bfc7ad94c1/volumes" Jan 29 16:42:36 crc kubenswrapper[4895]: I0129 16:42:36.037824 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:42:36 crc kubenswrapper[4895]: E0129 16:42:36.038793 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 
16:42:49 crc kubenswrapper[4895]: I0129 16:42:49.037414 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:42:49 crc kubenswrapper[4895]: E0129 16:42:49.038524 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:43:01 crc kubenswrapper[4895]: I0129 16:43:01.571103 4895 scope.go:117] "RemoveContainer" containerID="0f8410cb12cd607e0e7d025327a65a95e2311d578ae6378d2fb16e40632b8c8f" Jan 29 16:43:01 crc kubenswrapper[4895]: I0129 16:43:01.660349 4895 scope.go:117] "RemoveContainer" containerID="5c4b8cb083834686bf7c5ccbd359b21df14c20fbf9acc45a2f692c7e8468af56" Jan 29 16:43:04 crc kubenswrapper[4895]: I0129 16:43:04.037352 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:43:04 crc kubenswrapper[4895]: E0129 16:43:04.038604 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:43:06 crc kubenswrapper[4895]: I0129 16:43:06.051562 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-9njnh"] Jan 29 16:43:06 crc kubenswrapper[4895]: I0129 16:43:06.065026 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-9njnh"] Jan 29 16:43:07 crc kubenswrapper[4895]: I0129 16:43:07.049025 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf0090e-70c1-40ea-9cb0-e880c5c95c26" path="/var/lib/kubelet/pods/2cf0090e-70c1-40ea-9cb0-e880c5c95c26/volumes" Jan 29 16:43:17 crc kubenswrapper[4895]: I0129 16:43:17.048096 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:43:17 crc kubenswrapper[4895]: E0129 16:43:17.049924 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:43:31 crc kubenswrapper[4895]: I0129 16:43:31.037567 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:43:31 crc kubenswrapper[4895]: E0129 16:43:31.038796 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:43:46 crc kubenswrapper[4895]: I0129 16:43:46.037907 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:43:46 crc kubenswrapper[4895]: E0129 16:43:46.039126 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:44:01 crc kubenswrapper[4895]: I0129 16:44:01.037291 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:44:01 crc kubenswrapper[4895]: E0129 16:44:01.038528 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:44:01 crc kubenswrapper[4895]: I0129 16:44:01.784614 4895 scope.go:117] "RemoveContainer" containerID="a83f1deb854eccbec4e80432fa04db33e0fee10c0969cf11a8d09ab0069417a7" Jan 29 16:44:04 crc kubenswrapper[4895]: E0129 16:44:04.772316 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e9a6bba_a6c1_4ee0_89a6_e36735a9e18b.slice/crio-conmon-7831da4cbb000ea4ab55af544abfd485474a959c09bca57f168c7c0ea23536cc.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:44:05 crc kubenswrapper[4895]: I0129 16:44:05.041543 4895 generic.go:334] "Generic (PLEG): container finished" podID="5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b" containerID="7831da4cbb000ea4ab55af544abfd485474a959c09bca57f168c7c0ea23536cc" exitCode=0 Jan 29 16:44:05 crc kubenswrapper[4895]: I0129 16:44:05.048047 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" event={"ID":"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b","Type":"ContainerDied","Data":"7831da4cbb000ea4ab55af544abfd485474a959c09bca57f168c7c0ea23536cc"} Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.469207 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.634354 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vmx9\" (UniqueName: \"kubernetes.io/projected/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-kube-api-access-5vmx9\") pod \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.634451 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-ssh-key-openstack-edpm-ipam\") pod \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.634981 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-bootstrap-combined-ca-bundle\") pod \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.635027 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-inventory\") pod \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\" (UID: \"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b\") " Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.642509 4895 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-kube-api-access-5vmx9" (OuterVolumeSpecName: "kube-api-access-5vmx9") pod "5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b" (UID: "5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b"). InnerVolumeSpecName "kube-api-access-5vmx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.643199 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b" (UID: "5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.664813 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-inventory" (OuterVolumeSpecName: "inventory") pod "5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b" (UID: "5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.666576 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b" (UID: "5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.737420 4895 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.737491 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.737514 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vmx9\" (UniqueName: \"kubernetes.io/projected/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-kube-api-access-5vmx9\") on node \"crc\" DevicePath \"\"" Jan 29 16:44:06 crc kubenswrapper[4895]: I0129 16:44:06.737534 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.061094 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" event={"ID":"5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b","Type":"ContainerDied","Data":"f509e6f96f68d0ea2f972bb6f37bcbcc0a53fb8b5256be76530ea1af5293215c"} Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.061147 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f509e6f96f68d0ea2f972bb6f37bcbcc0a53fb8b5256be76530ea1af5293215c" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.061210 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.172143 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts"] Jan 29 16:44:07 crc kubenswrapper[4895]: E0129 16:44:07.172784 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.172807 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.173717 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.174447 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.177886 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.177966 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.178742 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.179024 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.189605 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts"] Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.349717 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99srl\" (UniqueName: \"kubernetes.io/projected/becfe08f-840f-4dbe-9bef-a7dc07254f3c-kube-api-access-99srl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jjts\" (UID: \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.349791 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfe08f-840f-4dbe-9bef-a7dc07254f3c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jjts\" (UID: \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 
16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.351048 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfe08f-840f-4dbe-9bef-a7dc07254f3c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jjts\" (UID: \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.455287 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99srl\" (UniqueName: \"kubernetes.io/projected/becfe08f-840f-4dbe-9bef-a7dc07254f3c-kube-api-access-99srl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jjts\" (UID: \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.455419 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfe08f-840f-4dbe-9bef-a7dc07254f3c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jjts\" (UID: \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.455579 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfe08f-840f-4dbe-9bef-a7dc07254f3c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jjts\" (UID: \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.465507 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/becfe08f-840f-4dbe-9bef-a7dc07254f3c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jjts\" (UID: \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.465586 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfe08f-840f-4dbe-9bef-a7dc07254f3c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jjts\" (UID: \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.477589 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99srl\" (UniqueName: \"kubernetes.io/projected/becfe08f-840f-4dbe-9bef-a7dc07254f3c-kube-api-access-99srl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7jjts\" (UID: \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 16:44:07 crc kubenswrapper[4895]: I0129 16:44:07.491215 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 16:44:08 crc kubenswrapper[4895]: I0129 16:44:08.103701 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts"] Jan 29 16:44:08 crc kubenswrapper[4895]: I0129 16:44:08.110808 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:44:09 crc kubenswrapper[4895]: I0129 16:44:09.086585 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" event={"ID":"becfe08f-840f-4dbe-9bef-a7dc07254f3c","Type":"ContainerStarted","Data":"628d1b4bcc8e8d296ac0c687451b1503607adc55cb9a6cb906a3c5a45ae96f01"} Jan 29 16:44:09 crc kubenswrapper[4895]: I0129 16:44:09.087017 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" event={"ID":"becfe08f-840f-4dbe-9bef-a7dc07254f3c","Type":"ContainerStarted","Data":"69bc89a9fae1f2ec3ab349544b7454c4969363abd7dd07e14897e9cde281f47d"} Jan 29 16:44:09 crc kubenswrapper[4895]: I0129 16:44:09.106466 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" podStartSLOduration=1.7055098709999998 podStartE2EDuration="2.106439148s" podCreationTimestamp="2026-01-29 16:44:07 +0000 UTC" firstStartedPulling="2026-01-29 16:44:08.110542159 +0000 UTC m=+1931.913519423" lastFinishedPulling="2026-01-29 16:44:08.511471426 +0000 UTC m=+1932.314448700" observedRunningTime="2026-01-29 16:44:09.103830478 +0000 UTC m=+1932.906807762" watchObservedRunningTime="2026-01-29 16:44:09.106439148 +0000 UTC m=+1932.909416412" Jan 29 16:44:12 crc kubenswrapper[4895]: I0129 16:44:12.037533 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 
29 16:44:12 crc kubenswrapper[4895]: E0129 16:44:12.038330 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:44:27 crc kubenswrapper[4895]: I0129 16:44:27.046343 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:44:27 crc kubenswrapper[4895]: E0129 16:44:27.047391 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:44:38 crc kubenswrapper[4895]: I0129 16:44:38.036812 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:44:38 crc kubenswrapper[4895]: E0129 16:44:38.037738 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:44:49 crc kubenswrapper[4895]: I0129 16:44:49.037445 4895 scope.go:117] "RemoveContainer" 
containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:44:49 crc kubenswrapper[4895]: E0129 16:44:49.038463 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.175204 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh"] Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.177804 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.181590 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.181722 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.188359 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh"] Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.219595 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-secret-volume\") pod \"collect-profiles-29495085-7t2dh\" (UID: \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.219690 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-config-volume\") pod \"collect-profiles-29495085-7t2dh\" (UID: \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.219812 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrlwh\" (UniqueName: \"kubernetes.io/projected/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-kube-api-access-qrlwh\") pod \"collect-profiles-29495085-7t2dh\" (UID: \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.322148 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-config-volume\") pod \"collect-profiles-29495085-7t2dh\" (UID: \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.322279 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrlwh\" (UniqueName: \"kubernetes.io/projected/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-kube-api-access-qrlwh\") pod \"collect-profiles-29495085-7t2dh\" (UID: \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.322336 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-secret-volume\") pod \"collect-profiles-29495085-7t2dh\" (UID: \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.323397 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-config-volume\") pod \"collect-profiles-29495085-7t2dh\" (UID: \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.339673 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-secret-volume\") pod \"collect-profiles-29495085-7t2dh\" (UID: \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.346197 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrlwh\" (UniqueName: \"kubernetes.io/projected/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-kube-api-access-qrlwh\") pod \"collect-profiles-29495085-7t2dh\" (UID: \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:00 crc kubenswrapper[4895]: I0129 16:45:00.508986 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:01 crc kubenswrapper[4895]: I0129 16:45:01.002023 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh"] Jan 29 16:45:01 crc kubenswrapper[4895]: I0129 16:45:01.037363 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:45:01 crc kubenswrapper[4895]: I0129 16:45:01.601580 4895 generic.go:334] "Generic (PLEG): container finished" podID="0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73" containerID="92c6d1ac0d4adad01c597565213f0e3e78aff25743a51bc8187551e65a564017" exitCode=0 Jan 29 16:45:01 crc kubenswrapper[4895]: I0129 16:45:01.601682 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" event={"ID":"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73","Type":"ContainerDied","Data":"92c6d1ac0d4adad01c597565213f0e3e78aff25743a51bc8187551e65a564017"} Jan 29 16:45:01 crc kubenswrapper[4895]: I0129 16:45:01.602082 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" event={"ID":"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73","Type":"ContainerStarted","Data":"1a7854b9b86b7223902bced77734665749aab60640cc1a47bffddc30cb3fc96e"} Jan 29 16:45:01 crc kubenswrapper[4895]: I0129 16:45:01.606354 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"d3e768471e09a634e8a9a0e0fd365a02d913a3c9ced9b63c4b19dc39c06cee68"} Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.061846 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.196086 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrlwh\" (UniqueName: \"kubernetes.io/projected/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-kube-api-access-qrlwh\") pod \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\" (UID: \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\") " Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.196203 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-secret-volume\") pod \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\" (UID: \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\") " Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.196248 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-config-volume\") pod \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\" (UID: \"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73\") " Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.197502 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-config-volume" (OuterVolumeSpecName: "config-volume") pod "0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73" (UID: "0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.204085 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-kube-api-access-qrlwh" (OuterVolumeSpecName: "kube-api-access-qrlwh") pod "0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73" (UID: "0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73"). 
InnerVolumeSpecName "kube-api-access-qrlwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.204177 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73" (UID: "0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.298562 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrlwh\" (UniqueName: \"kubernetes.io/projected/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-kube-api-access-qrlwh\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.298615 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.298626 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.624686 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" event={"ID":"0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73","Type":"ContainerDied","Data":"1a7854b9b86b7223902bced77734665749aab60640cc1a47bffddc30cb3fc96e"} Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.624739 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a7854b9b86b7223902bced77734665749aab60640cc1a47bffddc30cb3fc96e" Jan 29 16:45:03 crc kubenswrapper[4895]: I0129 16:45:03.624861 4895 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh" Jan 29 16:45:04 crc kubenswrapper[4895]: I0129 16:45:04.166355 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g"] Jan 29 16:45:04 crc kubenswrapper[4895]: I0129 16:45:04.178271 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-q8q6g"] Jan 29 16:45:05 crc kubenswrapper[4895]: I0129 16:45:05.049918 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da56ae41-00cb-4345-a6be-1ceb542b8afe" path="/var/lib/kubelet/pods/da56ae41-00cb-4345-a6be-1ceb542b8afe/volumes" Jan 29 16:45:18 crc kubenswrapper[4895]: I0129 16:45:18.759364 4895 generic.go:334] "Generic (PLEG): container finished" podID="becfe08f-840f-4dbe-9bef-a7dc07254f3c" containerID="628d1b4bcc8e8d296ac0c687451b1503607adc55cb9a6cb906a3c5a45ae96f01" exitCode=0 Jan 29 16:45:18 crc kubenswrapper[4895]: I0129 16:45:18.759470 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" event={"ID":"becfe08f-840f-4dbe-9bef-a7dc07254f3c","Type":"ContainerDied","Data":"628d1b4bcc8e8d296ac0c687451b1503607adc55cb9a6cb906a3c5a45ae96f01"} Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.184641 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.289243 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfe08f-840f-4dbe-9bef-a7dc07254f3c-inventory\") pod \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\" (UID: \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\") " Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.289358 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfe08f-840f-4dbe-9bef-a7dc07254f3c-ssh-key-openstack-edpm-ipam\") pod \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\" (UID: \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\") " Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.289451 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99srl\" (UniqueName: \"kubernetes.io/projected/becfe08f-840f-4dbe-9bef-a7dc07254f3c-kube-api-access-99srl\") pod \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\" (UID: \"becfe08f-840f-4dbe-9bef-a7dc07254f3c\") " Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.298083 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becfe08f-840f-4dbe-9bef-a7dc07254f3c-kube-api-access-99srl" (OuterVolumeSpecName: "kube-api-access-99srl") pod "becfe08f-840f-4dbe-9bef-a7dc07254f3c" (UID: "becfe08f-840f-4dbe-9bef-a7dc07254f3c"). InnerVolumeSpecName "kube-api-access-99srl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.323368 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfe08f-840f-4dbe-9bef-a7dc07254f3c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "becfe08f-840f-4dbe-9bef-a7dc07254f3c" (UID: "becfe08f-840f-4dbe-9bef-a7dc07254f3c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.323721 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becfe08f-840f-4dbe-9bef-a7dc07254f3c-inventory" (OuterVolumeSpecName: "inventory") pod "becfe08f-840f-4dbe-9bef-a7dc07254f3c" (UID: "becfe08f-840f-4dbe-9bef-a7dc07254f3c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.391970 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/becfe08f-840f-4dbe-9bef-a7dc07254f3c-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.392013 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/becfe08f-840f-4dbe-9bef-a7dc07254f3c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.392035 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99srl\" (UniqueName: \"kubernetes.io/projected/becfe08f-840f-4dbe-9bef-a7dc07254f3c-kube-api-access-99srl\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.786970 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" 
event={"ID":"becfe08f-840f-4dbe-9bef-a7dc07254f3c","Type":"ContainerDied","Data":"69bc89a9fae1f2ec3ab349544b7454c4969363abd7dd07e14897e9cde281f47d"} Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.787510 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69bc89a9fae1f2ec3ab349544b7454c4969363abd7dd07e14897e9cde281f47d" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.787124 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.882517 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n"] Jan 29 16:45:20 crc kubenswrapper[4895]: E0129 16:45:20.883365 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73" containerName="collect-profiles" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.883478 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73" containerName="collect-profiles" Jan 29 16:45:20 crc kubenswrapper[4895]: E0129 16:45:20.883625 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="becfe08f-840f-4dbe-9bef-a7dc07254f3c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.883713 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="becfe08f-840f-4dbe-9bef-a7dc07254f3c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.884056 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73" containerName="collect-profiles" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.884147 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="becfe08f-840f-4dbe-9bef-a7dc07254f3c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.885067 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.888136 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.888304 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.888528 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.888536 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.898145 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n"] Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.901948 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3d7ffa-0c28-41bc-8701-e511ea083796-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8622n\" (UID: \"7a3d7ffa-0c28-41bc-8701-e511ea083796\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.902011 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3d7ffa-0c28-41bc-8701-e511ea083796-ssh-key-openstack-edpm-ipam\") 
pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8622n\" (UID: \"7a3d7ffa-0c28-41bc-8701-e511ea083796\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:20 crc kubenswrapper[4895]: I0129 16:45:20.902128 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxnk4\" (UniqueName: \"kubernetes.io/projected/7a3d7ffa-0c28-41bc-8701-e511ea083796-kube-api-access-dxnk4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8622n\" (UID: \"7a3d7ffa-0c28-41bc-8701-e511ea083796\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:21 crc kubenswrapper[4895]: I0129 16:45:21.004667 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3d7ffa-0c28-41bc-8701-e511ea083796-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8622n\" (UID: \"7a3d7ffa-0c28-41bc-8701-e511ea083796\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:21 crc kubenswrapper[4895]: I0129 16:45:21.004791 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3d7ffa-0c28-41bc-8701-e511ea083796-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8622n\" (UID: \"7a3d7ffa-0c28-41bc-8701-e511ea083796\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:21 crc kubenswrapper[4895]: I0129 16:45:21.004894 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxnk4\" (UniqueName: \"kubernetes.io/projected/7a3d7ffa-0c28-41bc-8701-e511ea083796-kube-api-access-dxnk4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8622n\" (UID: \"7a3d7ffa-0c28-41bc-8701-e511ea083796\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:21 crc kubenswrapper[4895]: I0129 16:45:21.009632 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3d7ffa-0c28-41bc-8701-e511ea083796-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8622n\" (UID: \"7a3d7ffa-0c28-41bc-8701-e511ea083796\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:21 crc kubenswrapper[4895]: I0129 16:45:21.011170 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3d7ffa-0c28-41bc-8701-e511ea083796-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8622n\" (UID: \"7a3d7ffa-0c28-41bc-8701-e511ea083796\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:21 crc kubenswrapper[4895]: I0129 16:45:21.027957 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxnk4\" (UniqueName: \"kubernetes.io/projected/7a3d7ffa-0c28-41bc-8701-e511ea083796-kube-api-access-dxnk4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8622n\" (UID: \"7a3d7ffa-0c28-41bc-8701-e511ea083796\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:21 crc kubenswrapper[4895]: I0129 16:45:21.209014 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:22 crc kubenswrapper[4895]: I0129 16:45:21.598179 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n"] Jan 29 16:45:22 crc kubenswrapper[4895]: W0129 16:45:21.614376 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a3d7ffa_0c28_41bc_8701_e511ea083796.slice/crio-5dc6b542dbb980ba54f89b86d305df80ae1fe81d2fb156d288b3dc1a87e4e72f WatchSource:0}: Error finding container 5dc6b542dbb980ba54f89b86d305df80ae1fe81d2fb156d288b3dc1a87e4e72f: Status 404 returned error can't find the container with id 5dc6b542dbb980ba54f89b86d305df80ae1fe81d2fb156d288b3dc1a87e4e72f Jan 29 16:45:22 crc kubenswrapper[4895]: I0129 16:45:21.802685 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" event={"ID":"7a3d7ffa-0c28-41bc-8701-e511ea083796","Type":"ContainerStarted","Data":"5dc6b542dbb980ba54f89b86d305df80ae1fe81d2fb156d288b3dc1a87e4e72f"} Jan 29 16:45:22 crc kubenswrapper[4895]: I0129 16:45:22.829269 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" event={"ID":"7a3d7ffa-0c28-41bc-8701-e511ea083796","Type":"ContainerStarted","Data":"d8ffa5e5f9cf3ab6b5ec82abdc4cc4fe49fa1ad96380ac220283be00a66f20d7"} Jan 29 16:45:22 crc kubenswrapper[4895]: I0129 16:45:22.890918 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" podStartSLOduration=2.47618804 podStartE2EDuration="2.890887471s" podCreationTimestamp="2026-01-29 16:45:20 +0000 UTC" firstStartedPulling="2026-01-29 16:45:21.621351669 +0000 UTC m=+2005.424328933" lastFinishedPulling="2026-01-29 16:45:22.0360511 +0000 UTC 
m=+2005.839028364" observedRunningTime="2026-01-29 16:45:22.881336063 +0000 UTC m=+2006.684313337" watchObservedRunningTime="2026-01-29 16:45:22.890887471 +0000 UTC m=+2006.693864745" Jan 29 16:45:27 crc kubenswrapper[4895]: E0129 16:45:27.002619 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a3d7ffa_0c28_41bc_8701_e511ea083796.slice/crio-conmon-d8ffa5e5f9cf3ab6b5ec82abdc4cc4fe49fa1ad96380ac220283be00a66f20d7.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:45:27 crc kubenswrapper[4895]: I0129 16:45:27.880117 4895 generic.go:334] "Generic (PLEG): container finished" podID="7a3d7ffa-0c28-41bc-8701-e511ea083796" containerID="d8ffa5e5f9cf3ab6b5ec82abdc4cc4fe49fa1ad96380ac220283be00a66f20d7" exitCode=0 Jan 29 16:45:27 crc kubenswrapper[4895]: I0129 16:45:27.880207 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" event={"ID":"7a3d7ffa-0c28-41bc-8701-e511ea083796","Type":"ContainerDied","Data":"d8ffa5e5f9cf3ab6b5ec82abdc4cc4fe49fa1ad96380ac220283be00a66f20d7"} Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.338600 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.404542 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3d7ffa-0c28-41bc-8701-e511ea083796-inventory\") pod \"7a3d7ffa-0c28-41bc-8701-e511ea083796\" (UID: \"7a3d7ffa-0c28-41bc-8701-e511ea083796\") " Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.404663 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3d7ffa-0c28-41bc-8701-e511ea083796-ssh-key-openstack-edpm-ipam\") pod \"7a3d7ffa-0c28-41bc-8701-e511ea083796\" (UID: \"7a3d7ffa-0c28-41bc-8701-e511ea083796\") " Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.405050 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxnk4\" (UniqueName: \"kubernetes.io/projected/7a3d7ffa-0c28-41bc-8701-e511ea083796-kube-api-access-dxnk4\") pod \"7a3d7ffa-0c28-41bc-8701-e511ea083796\" (UID: \"7a3d7ffa-0c28-41bc-8701-e511ea083796\") " Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.411091 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a3d7ffa-0c28-41bc-8701-e511ea083796-kube-api-access-dxnk4" (OuterVolumeSpecName: "kube-api-access-dxnk4") pod "7a3d7ffa-0c28-41bc-8701-e511ea083796" (UID: "7a3d7ffa-0c28-41bc-8701-e511ea083796"). InnerVolumeSpecName "kube-api-access-dxnk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.432182 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a3d7ffa-0c28-41bc-8701-e511ea083796-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7a3d7ffa-0c28-41bc-8701-e511ea083796" (UID: "7a3d7ffa-0c28-41bc-8701-e511ea083796"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.434701 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a3d7ffa-0c28-41bc-8701-e511ea083796-inventory" (OuterVolumeSpecName: "inventory") pod "7a3d7ffa-0c28-41bc-8701-e511ea083796" (UID: "7a3d7ffa-0c28-41bc-8701-e511ea083796"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.510922 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a3d7ffa-0c28-41bc-8701-e511ea083796-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.510970 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a3d7ffa-0c28-41bc-8701-e511ea083796-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.510983 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxnk4\" (UniqueName: \"kubernetes.io/projected/7a3d7ffa-0c28-41bc-8701-e511ea083796-kube-api-access-dxnk4\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.908826 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" 
event={"ID":"7a3d7ffa-0c28-41bc-8701-e511ea083796","Type":"ContainerDied","Data":"5dc6b542dbb980ba54f89b86d305df80ae1fe81d2fb156d288b3dc1a87e4e72f"} Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.908903 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dc6b542dbb980ba54f89b86d305df80ae1fe81d2fb156d288b3dc1a87e4e72f" Jan 29 16:45:29 crc kubenswrapper[4895]: I0129 16:45:29.908970 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:29.999983 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k"] Jan 29 16:45:30 crc kubenswrapper[4895]: E0129 16:45:30.000436 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a3d7ffa-0c28-41bc-8701-e511ea083796" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.000458 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a3d7ffa-0c28-41bc-8701-e511ea083796" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.000704 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a3d7ffa-0c28-41bc-8701-e511ea083796" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.001522 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.005350 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.010674 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.012786 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.014846 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.016149 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k"] Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.125285 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-88h6k\" (UID: \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.125423 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-947qx\" (UniqueName: \"kubernetes.io/projected/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-kube-api-access-947qx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-88h6k\" (UID: \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.125552 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-88h6k\" (UID: \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.227634 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-947qx\" (UniqueName: \"kubernetes.io/projected/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-kube-api-access-947qx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-88h6k\" (UID: \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.227772 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-88h6k\" (UID: \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.227905 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-88h6k\" (UID: \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.233910 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-88h6k\" (UID: \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.237434 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-88h6k\" (UID: \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.254236 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-947qx\" (UniqueName: \"kubernetes.io/projected/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-kube-api-access-947qx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-88h6k\" (UID: \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.324901 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:45:30 crc kubenswrapper[4895]: I0129 16:45:30.914842 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k"] Jan 29 16:45:31 crc kubenswrapper[4895]: I0129 16:45:31.929618 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" event={"ID":"3e71dec6-dfb5-4b6d-822f-0b1da02025ce","Type":"ContainerStarted","Data":"e8e8ea1e5d6a1727355fe0091fa5433c7fd8c2a49bbe0a65d29c59e026fd92cd"} Jan 29 16:45:32 crc kubenswrapper[4895]: I0129 16:45:32.940129 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" event={"ID":"3e71dec6-dfb5-4b6d-822f-0b1da02025ce","Type":"ContainerStarted","Data":"4e29b2a01e6c3c212b51ac724a9d8d9277ecc863bc4365eaa8e4d3887cebb820"} Jan 29 16:45:32 crc kubenswrapper[4895]: I0129 16:45:32.966908 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" podStartSLOduration=2.774930708 podStartE2EDuration="3.966851328s" podCreationTimestamp="2026-01-29 16:45:29 +0000 UTC" firstStartedPulling="2026-01-29 16:45:30.92485321 +0000 UTC m=+2014.727830474" lastFinishedPulling="2026-01-29 16:45:32.11677379 +0000 UTC m=+2015.919751094" observedRunningTime="2026-01-29 16:45:32.958705048 +0000 UTC m=+2016.761682312" watchObservedRunningTime="2026-01-29 16:45:32.966851328 +0000 UTC m=+2016.769828642" Jan 29 16:46:01 crc kubenswrapper[4895]: I0129 16:46:01.889566 4895 scope.go:117] "RemoveContainer" containerID="ce3b60ed6622a0491d64cab367020660c45750d6cfaaf6ad2aefd96bcb5b7fd6" Jan 29 16:46:08 crc kubenswrapper[4895]: I0129 16:46:08.610500 4895 generic.go:334] "Generic (PLEG): container finished" podID="3e71dec6-dfb5-4b6d-822f-0b1da02025ce" 
containerID="4e29b2a01e6c3c212b51ac724a9d8d9277ecc863bc4365eaa8e4d3887cebb820" exitCode=0 Jan 29 16:46:08 crc kubenswrapper[4895]: I0129 16:46:08.610567 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" event={"ID":"3e71dec6-dfb5-4b6d-822f-0b1da02025ce","Type":"ContainerDied","Data":"4e29b2a01e6c3c212b51ac724a9d8d9277ecc863bc4365eaa8e4d3887cebb820"} Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.114098 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.253062 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-inventory\") pod \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\" (UID: \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\") " Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.253743 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-ssh-key-openstack-edpm-ipam\") pod \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\" (UID: \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\") " Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.254007 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-947qx\" (UniqueName: \"kubernetes.io/projected/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-kube-api-access-947qx\") pod \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\" (UID: \"3e71dec6-dfb5-4b6d-822f-0b1da02025ce\") " Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.260774 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-kube-api-access-947qx" (OuterVolumeSpecName: "kube-api-access-947qx") 
pod "3e71dec6-dfb5-4b6d-822f-0b1da02025ce" (UID: "3e71dec6-dfb5-4b6d-822f-0b1da02025ce"). InnerVolumeSpecName "kube-api-access-947qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.280853 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3e71dec6-dfb5-4b6d-822f-0b1da02025ce" (UID: "3e71dec6-dfb5-4b6d-822f-0b1da02025ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.298010 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-inventory" (OuterVolumeSpecName: "inventory") pod "3e71dec6-dfb5-4b6d-822f-0b1da02025ce" (UID: "3e71dec6-dfb5-4b6d-822f-0b1da02025ce"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.356901 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.356959 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-947qx\" (UniqueName: \"kubernetes.io/projected/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-kube-api-access-947qx\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.356975 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e71dec6-dfb5-4b6d-822f-0b1da02025ce-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.638791 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" event={"ID":"3e71dec6-dfb5-4b6d-822f-0b1da02025ce","Type":"ContainerDied","Data":"e8e8ea1e5d6a1727355fe0091fa5433c7fd8c2a49bbe0a65d29c59e026fd92cd"} Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.638901 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e8ea1e5d6a1727355fe0091fa5433c7fd8c2a49bbe0a65d29c59e026fd92cd" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.639050 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.751677 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h"] Jan 29 16:46:10 crc kubenswrapper[4895]: E0129 16:46:10.752306 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e71dec6-dfb5-4b6d-822f-0b1da02025ce" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.752331 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e71dec6-dfb5-4b6d-822f-0b1da02025ce" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.752601 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e71dec6-dfb5-4b6d-822f-0b1da02025ce" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.753533 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.756443 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.757587 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.757601 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.762440 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h"] Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.763010 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.869743 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h\" (UID: \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.869910 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t96n\" (UniqueName: \"kubernetes.io/projected/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-kube-api-access-8t96n\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h\" (UID: \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:10 crc 
kubenswrapper[4895]: I0129 16:46:10.869960 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h\" (UID: \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.972012 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h\" (UID: \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.972255 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t96n\" (UniqueName: \"kubernetes.io/projected/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-kube-api-access-8t96n\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h\" (UID: \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.972299 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h\" (UID: \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.976456 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h\" (UID: \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.978936 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h\" (UID: \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:10 crc kubenswrapper[4895]: I0129 16:46:10.990038 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t96n\" (UniqueName: \"kubernetes.io/projected/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-kube-api-access-8t96n\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h\" (UID: \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:11 crc kubenswrapper[4895]: I0129 16:46:11.074957 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:11 crc kubenswrapper[4895]: I0129 16:46:11.742073 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h"] Jan 29 16:46:12 crc kubenswrapper[4895]: I0129 16:46:12.659448 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" event={"ID":"76358913-8879-4dd3-8ca1-8dae5ff9a1b2","Type":"ContainerStarted","Data":"21156b8d85b8e756c417af7f1e53e9715f39bfc6c73eb813d8509a78110046a1"} Jan 29 16:46:13 crc kubenswrapper[4895]: I0129 16:46:13.676600 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" event={"ID":"76358913-8879-4dd3-8ca1-8dae5ff9a1b2","Type":"ContainerStarted","Data":"4cd308f04a0f081cde556b0d300ce6ec256329a79c6bb3e0335d7d10e0f41f13"} Jan 29 16:46:13 crc kubenswrapper[4895]: I0129 16:46:13.717388 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" podStartSLOduration=2.760326551 podStartE2EDuration="3.717346179s" podCreationTimestamp="2026-01-29 16:46:10 +0000 UTC" firstStartedPulling="2026-01-29 16:46:11.748499419 +0000 UTC m=+2055.551476703" lastFinishedPulling="2026-01-29 16:46:12.705519067 +0000 UTC m=+2056.508496331" observedRunningTime="2026-01-29 16:46:13.699967079 +0000 UTC m=+2057.502944423" watchObservedRunningTime="2026-01-29 16:46:13.717346179 +0000 UTC m=+2057.520323504" Jan 29 16:46:17 crc kubenswrapper[4895]: I0129 16:46:17.717709 4895 generic.go:334] "Generic (PLEG): container finished" podID="76358913-8879-4dd3-8ca1-8dae5ff9a1b2" containerID="4cd308f04a0f081cde556b0d300ce6ec256329a79c6bb3e0335d7d10e0f41f13" exitCode=0 Jan 29 16:46:17 crc kubenswrapper[4895]: I0129 16:46:17.717805 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" event={"ID":"76358913-8879-4dd3-8ca1-8dae5ff9a1b2","Type":"ContainerDied","Data":"4cd308f04a0f081cde556b0d300ce6ec256329a79c6bb3e0335d7d10e0f41f13"} Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.186633 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.260375 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-ssh-key-openstack-edpm-ipam\") pod \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\" (UID: \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\") " Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.260448 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-inventory\") pod \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\" (UID: \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\") " Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.260678 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t96n\" (UniqueName: \"kubernetes.io/projected/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-kube-api-access-8t96n\") pod \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\" (UID: \"76358913-8879-4dd3-8ca1-8dae5ff9a1b2\") " Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.267070 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-kube-api-access-8t96n" (OuterVolumeSpecName: "kube-api-access-8t96n") pod "76358913-8879-4dd3-8ca1-8dae5ff9a1b2" (UID: "76358913-8879-4dd3-8ca1-8dae5ff9a1b2"). InnerVolumeSpecName "kube-api-access-8t96n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.287806 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-inventory" (OuterVolumeSpecName: "inventory") pod "76358913-8879-4dd3-8ca1-8dae5ff9a1b2" (UID: "76358913-8879-4dd3-8ca1-8dae5ff9a1b2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.289537 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "76358913-8879-4dd3-8ca1-8dae5ff9a1b2" (UID: "76358913-8879-4dd3-8ca1-8dae5ff9a1b2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.362742 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.362789 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t96n\" (UniqueName: \"kubernetes.io/projected/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-kube-api-access-8t96n\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.362802 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76358913-8879-4dd3-8ca1-8dae5ff9a1b2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.744002 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" 
event={"ID":"76358913-8879-4dd3-8ca1-8dae5ff9a1b2","Type":"ContainerDied","Data":"21156b8d85b8e756c417af7f1e53e9715f39bfc6c73eb813d8509a78110046a1"} Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.744069 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21156b8d85b8e756c417af7f1e53e9715f39bfc6c73eb813d8509a78110046a1" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.744153 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.823430 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk"] Jan 29 16:46:19 crc kubenswrapper[4895]: E0129 16:46:19.825499 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76358913-8879-4dd3-8ca1-8dae5ff9a1b2" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.825549 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="76358913-8879-4dd3-8ca1-8dae5ff9a1b2" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.825963 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="76358913-8879-4dd3-8ca1-8dae5ff9a1b2" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.827606 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.830276 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.830585 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.830798 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.832083 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.839360 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk"] Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.977179 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8413a12-38df-4b6d-92d4-5f6750ca05dd-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk\" (UID: \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.977719 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7stj9\" (UniqueName: \"kubernetes.io/projected/d8413a12-38df-4b6d-92d4-5f6750ca05dd-kube-api-access-7stj9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk\" (UID: \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:46:19 crc kubenswrapper[4895]: I0129 16:46:19.977756 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8413a12-38df-4b6d-92d4-5f6750ca05dd-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk\" (UID: \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:46:20 crc kubenswrapper[4895]: I0129 16:46:20.079413 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8413a12-38df-4b6d-92d4-5f6750ca05dd-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk\" (UID: \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:46:20 crc kubenswrapper[4895]: I0129 16:46:20.079527 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7stj9\" (UniqueName: \"kubernetes.io/projected/d8413a12-38df-4b6d-92d4-5f6750ca05dd-kube-api-access-7stj9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk\" (UID: \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:46:20 crc kubenswrapper[4895]: I0129 16:46:20.079573 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8413a12-38df-4b6d-92d4-5f6750ca05dd-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk\" (UID: \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:46:20 crc kubenswrapper[4895]: I0129 16:46:20.087426 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d8413a12-38df-4b6d-92d4-5f6750ca05dd-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk\" (UID: \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:46:20 crc kubenswrapper[4895]: I0129 16:46:20.087587 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8413a12-38df-4b6d-92d4-5f6750ca05dd-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk\" (UID: \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:46:20 crc kubenswrapper[4895]: I0129 16:46:20.104066 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7stj9\" (UniqueName: \"kubernetes.io/projected/d8413a12-38df-4b6d-92d4-5f6750ca05dd-kube-api-access-7stj9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk\" (UID: \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:46:20 crc kubenswrapper[4895]: I0129 16:46:20.153741 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:46:20 crc kubenswrapper[4895]: I0129 16:46:20.723688 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk"] Jan 29 16:46:20 crc kubenswrapper[4895]: I0129 16:46:20.755203 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" event={"ID":"d8413a12-38df-4b6d-92d4-5f6750ca05dd","Type":"ContainerStarted","Data":"5760bcff90ef5de8839ee388134d3bbd2d4b06bf43392685833fd6a988fef84a"} Jan 29 16:46:21 crc kubenswrapper[4895]: I0129 16:46:21.768055 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" event={"ID":"d8413a12-38df-4b6d-92d4-5f6750ca05dd","Type":"ContainerStarted","Data":"23b50276999418237f3cac7cfe00240748e884b1fb8b8cc7d17ffaba805b58c9"} Jan 29 16:46:21 crc kubenswrapper[4895]: I0129 16:46:21.799843 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" podStartSLOduration=2.34810364 podStartE2EDuration="2.79981237s" podCreationTimestamp="2026-01-29 16:46:19 +0000 UTC" firstStartedPulling="2026-01-29 16:46:20.735442478 +0000 UTC m=+2064.538419742" lastFinishedPulling="2026-01-29 16:46:21.187151208 +0000 UTC m=+2064.990128472" observedRunningTime="2026-01-29 16:46:21.791974007 +0000 UTC m=+2065.594951271" watchObservedRunningTime="2026-01-29 16:46:21.79981237 +0000 UTC m=+2065.602789634" Jan 29 16:47:18 crc kubenswrapper[4895]: I0129 16:47:18.536906 4895 generic.go:334] "Generic (PLEG): container finished" podID="d8413a12-38df-4b6d-92d4-5f6750ca05dd" containerID="23b50276999418237f3cac7cfe00240748e884b1fb8b8cc7d17ffaba805b58c9" exitCode=0 Jan 29 16:47:18 crc kubenswrapper[4895]: I0129 16:47:18.536992 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" event={"ID":"d8413a12-38df-4b6d-92d4-5f6750ca05dd","Type":"ContainerDied","Data":"23b50276999418237f3cac7cfe00240748e884b1fb8b8cc7d17ffaba805b58c9"} Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.128074 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.281182 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8413a12-38df-4b6d-92d4-5f6750ca05dd-inventory\") pod \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\" (UID: \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\") " Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.281360 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7stj9\" (UniqueName: \"kubernetes.io/projected/d8413a12-38df-4b6d-92d4-5f6750ca05dd-kube-api-access-7stj9\") pod \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\" (UID: \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\") " Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.281488 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8413a12-38df-4b6d-92d4-5f6750ca05dd-ssh-key-openstack-edpm-ipam\") pod \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\" (UID: \"d8413a12-38df-4b6d-92d4-5f6750ca05dd\") " Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.290077 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8413a12-38df-4b6d-92d4-5f6750ca05dd-kube-api-access-7stj9" (OuterVolumeSpecName: "kube-api-access-7stj9") pod "d8413a12-38df-4b6d-92d4-5f6750ca05dd" (UID: "d8413a12-38df-4b6d-92d4-5f6750ca05dd"). InnerVolumeSpecName "kube-api-access-7stj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.323068 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8413a12-38df-4b6d-92d4-5f6750ca05dd-inventory" (OuterVolumeSpecName: "inventory") pod "d8413a12-38df-4b6d-92d4-5f6750ca05dd" (UID: "d8413a12-38df-4b6d-92d4-5f6750ca05dd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.325218 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8413a12-38df-4b6d-92d4-5f6750ca05dd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d8413a12-38df-4b6d-92d4-5f6750ca05dd" (UID: "d8413a12-38df-4b6d-92d4-5f6750ca05dd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.384032 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8413a12-38df-4b6d-92d4-5f6750ca05dd-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.384079 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7stj9\" (UniqueName: \"kubernetes.io/projected/d8413a12-38df-4b6d-92d4-5f6750ca05dd-kube-api-access-7stj9\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.384099 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8413a12-38df-4b6d-92d4-5f6750ca05dd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.563302 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" 
event={"ID":"d8413a12-38df-4b6d-92d4-5f6750ca05dd","Type":"ContainerDied","Data":"5760bcff90ef5de8839ee388134d3bbd2d4b06bf43392685833fd6a988fef84a"} Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.563374 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5760bcff90ef5de8839ee388134d3bbd2d4b06bf43392685833fd6a988fef84a" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.563377 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.657380 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pdtsm"] Jan 29 16:47:20 crc kubenswrapper[4895]: E0129 16:47:20.657843 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8413a12-38df-4b6d-92d4-5f6750ca05dd" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.657884 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8413a12-38df-4b6d-92d4-5f6750ca05dd" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.658088 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8413a12-38df-4b6d-92d4-5f6750ca05dd" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.658770 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.661022 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.661398 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.663102 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.664501 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.673500 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pdtsm"] Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.792543 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1ceda164-c8e7-4eb4-8999-082294558365-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pdtsm\" (UID: \"1ceda164-c8e7-4eb4-8999-082294558365\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.792631 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ceda164-c8e7-4eb4-8999-082294558365-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pdtsm\" (UID: \"1ceda164-c8e7-4eb4-8999-082294558365\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.792930 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bddfq\" (UniqueName: \"kubernetes.io/projected/1ceda164-c8e7-4eb4-8999-082294558365-kube-api-access-bddfq\") pod \"ssh-known-hosts-edpm-deployment-pdtsm\" (UID: \"1ceda164-c8e7-4eb4-8999-082294558365\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.895641 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1ceda164-c8e7-4eb4-8999-082294558365-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pdtsm\" (UID: \"1ceda164-c8e7-4eb4-8999-082294558365\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.896476 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ceda164-c8e7-4eb4-8999-082294558365-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pdtsm\" (UID: \"1ceda164-c8e7-4eb4-8999-082294558365\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.896523 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bddfq\" (UniqueName: \"kubernetes.io/projected/1ceda164-c8e7-4eb4-8999-082294558365-kube-api-access-bddfq\") pod \"ssh-known-hosts-edpm-deployment-pdtsm\" (UID: \"1ceda164-c8e7-4eb4-8999-082294558365\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.901655 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1ceda164-c8e7-4eb4-8999-082294558365-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pdtsm\" (UID: \"1ceda164-c8e7-4eb4-8999-082294558365\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.902574 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ceda164-c8e7-4eb4-8999-082294558365-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pdtsm\" (UID: \"1ceda164-c8e7-4eb4-8999-082294558365\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.919424 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bddfq\" (UniqueName: \"kubernetes.io/projected/1ceda164-c8e7-4eb4-8999-082294558365-kube-api-access-bddfq\") pod \"ssh-known-hosts-edpm-deployment-pdtsm\" (UID: \"1ceda164-c8e7-4eb4-8999-082294558365\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:20 crc kubenswrapper[4895]: I0129 16:47:20.978427 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:21 crc kubenswrapper[4895]: I0129 16:47:21.612976 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pdtsm"] Jan 29 16:47:22 crc kubenswrapper[4895]: I0129 16:47:22.585835 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" event={"ID":"1ceda164-c8e7-4eb4-8999-082294558365","Type":"ContainerStarted","Data":"7e05ca6f7763fc14ab8bacb5ae057612a39905bf5f524fcf5278e8a64b11a366"} Jan 29 16:47:22 crc kubenswrapper[4895]: I0129 16:47:22.586321 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" event={"ID":"1ceda164-c8e7-4eb4-8999-082294558365","Type":"ContainerStarted","Data":"ed6992daaa265244422d21957ba845c00390900170d9e18c7df15d93ee003f05"} Jan 29 16:47:22 crc kubenswrapper[4895]: I0129 16:47:22.610254 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" 
podStartSLOduration=2.15165268 podStartE2EDuration="2.610226845s" podCreationTimestamp="2026-01-29 16:47:20 +0000 UTC" firstStartedPulling="2026-01-29 16:47:21.639470634 +0000 UTC m=+2125.442447918" lastFinishedPulling="2026-01-29 16:47:22.098044829 +0000 UTC m=+2125.901022083" observedRunningTime="2026-01-29 16:47:22.603408201 +0000 UTC m=+2126.406385465" watchObservedRunningTime="2026-01-29 16:47:22.610226845 +0000 UTC m=+2126.413204109" Jan 29 16:47:27 crc kubenswrapper[4895]: I0129 16:47:27.823527 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:47:27 crc kubenswrapper[4895]: I0129 16:47:27.824489 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:47:30 crc kubenswrapper[4895]: I0129 16:47:30.667228 4895 generic.go:334] "Generic (PLEG): container finished" podID="1ceda164-c8e7-4eb4-8999-082294558365" containerID="7e05ca6f7763fc14ab8bacb5ae057612a39905bf5f524fcf5278e8a64b11a366" exitCode=0 Jan 29 16:47:30 crc kubenswrapper[4895]: I0129 16:47:30.667331 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" event={"ID":"1ceda164-c8e7-4eb4-8999-082294558365","Type":"ContainerDied","Data":"7e05ca6f7763fc14ab8bacb5ae057612a39905bf5f524fcf5278e8a64b11a366"} Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.207906 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.351106 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bddfq\" (UniqueName: \"kubernetes.io/projected/1ceda164-c8e7-4eb4-8999-082294558365-kube-api-access-bddfq\") pod \"1ceda164-c8e7-4eb4-8999-082294558365\" (UID: \"1ceda164-c8e7-4eb4-8999-082294558365\") " Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.351207 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1ceda164-c8e7-4eb4-8999-082294558365-inventory-0\") pod \"1ceda164-c8e7-4eb4-8999-082294558365\" (UID: \"1ceda164-c8e7-4eb4-8999-082294558365\") " Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.351497 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ceda164-c8e7-4eb4-8999-082294558365-ssh-key-openstack-edpm-ipam\") pod \"1ceda164-c8e7-4eb4-8999-082294558365\" (UID: \"1ceda164-c8e7-4eb4-8999-082294558365\") " Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.360083 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ceda164-c8e7-4eb4-8999-082294558365-kube-api-access-bddfq" (OuterVolumeSpecName: "kube-api-access-bddfq") pod "1ceda164-c8e7-4eb4-8999-082294558365" (UID: "1ceda164-c8e7-4eb4-8999-082294558365"). InnerVolumeSpecName "kube-api-access-bddfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.387155 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ceda164-c8e7-4eb4-8999-082294558365-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1ceda164-c8e7-4eb4-8999-082294558365" (UID: "1ceda164-c8e7-4eb4-8999-082294558365"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.392761 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ceda164-c8e7-4eb4-8999-082294558365-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1ceda164-c8e7-4eb4-8999-082294558365" (UID: "1ceda164-c8e7-4eb4-8999-082294558365"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.453439 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bddfq\" (UniqueName: \"kubernetes.io/projected/1ceda164-c8e7-4eb4-8999-082294558365-kube-api-access-bddfq\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.453706 4895 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1ceda164-c8e7-4eb4-8999-082294558365-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.453777 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ceda164-c8e7-4eb4-8999-082294558365-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.692575 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" event={"ID":"1ceda164-c8e7-4eb4-8999-082294558365","Type":"ContainerDied","Data":"ed6992daaa265244422d21957ba845c00390900170d9e18c7df15d93ee003f05"} Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.692641 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed6992daaa265244422d21957ba845c00390900170d9e18c7df15d93ee003f05" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.692646 
4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pdtsm" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.767428 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl"] Jan 29 16:47:32 crc kubenswrapper[4895]: E0129 16:47:32.768035 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ceda164-c8e7-4eb4-8999-082294558365" containerName="ssh-known-hosts-edpm-deployment" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.768061 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ceda164-c8e7-4eb4-8999-082294558365" containerName="ssh-known-hosts-edpm-deployment" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.768304 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ceda164-c8e7-4eb4-8999-082294558365" containerName="ssh-known-hosts-edpm-deployment" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.769222 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.776683 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.777180 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.777253 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.777375 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.792959 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl"] Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.863195 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/795c63db-36bb-49ad-9f75-f963d9c19ee9-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9pqwl\" (UID: \"795c63db-36bb-49ad-9f75-f963d9c19ee9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.863276 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/795c63db-36bb-49ad-9f75-f963d9c19ee9-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9pqwl\" (UID: \"795c63db-36bb-49ad-9f75-f963d9c19ee9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.863404 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67gx\" (UniqueName: \"kubernetes.io/projected/795c63db-36bb-49ad-9f75-f963d9c19ee9-kube-api-access-l67gx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9pqwl\" (UID: \"795c63db-36bb-49ad-9f75-f963d9c19ee9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.966345 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l67gx\" (UniqueName: \"kubernetes.io/projected/795c63db-36bb-49ad-9f75-f963d9c19ee9-kube-api-access-l67gx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9pqwl\" (UID: \"795c63db-36bb-49ad-9f75-f963d9c19ee9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.966528 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/795c63db-36bb-49ad-9f75-f963d9c19ee9-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9pqwl\" (UID: \"795c63db-36bb-49ad-9f75-f963d9c19ee9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.966626 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/795c63db-36bb-49ad-9f75-f963d9c19ee9-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9pqwl\" (UID: \"795c63db-36bb-49ad-9f75-f963d9c19ee9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.973245 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/795c63db-36bb-49ad-9f75-f963d9c19ee9-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-9pqwl\" (UID: \"795c63db-36bb-49ad-9f75-f963d9c19ee9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:32 crc kubenswrapper[4895]: I0129 16:47:32.973407 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/795c63db-36bb-49ad-9f75-f963d9c19ee9-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9pqwl\" (UID: \"795c63db-36bb-49ad-9f75-f963d9c19ee9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:33 crc kubenswrapper[4895]: I0129 16:47:33.006819 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l67gx\" (UniqueName: \"kubernetes.io/projected/795c63db-36bb-49ad-9f75-f963d9c19ee9-kube-api-access-l67gx\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9pqwl\" (UID: \"795c63db-36bb-49ad-9f75-f963d9c19ee9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:33 crc kubenswrapper[4895]: I0129 16:47:33.096522 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:33 crc kubenswrapper[4895]: I0129 16:47:33.701202 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl"] Jan 29 16:47:33 crc kubenswrapper[4895]: W0129 16:47:33.707144 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795c63db_36bb_49ad_9f75_f963d9c19ee9.slice/crio-12cc9f026c31bf1c2f76f55b0a15d886705df40d468b22c5cd2f4dfd8df0ba68 WatchSource:0}: Error finding container 12cc9f026c31bf1c2f76f55b0a15d886705df40d468b22c5cd2f4dfd8df0ba68: Status 404 returned error can't find the container with id 12cc9f026c31bf1c2f76f55b0a15d886705df40d468b22c5cd2f4dfd8df0ba68 Jan 29 16:47:34 crc kubenswrapper[4895]: I0129 16:47:34.714691 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" event={"ID":"795c63db-36bb-49ad-9f75-f963d9c19ee9","Type":"ContainerStarted","Data":"12cc9f026c31bf1c2f76f55b0a15d886705df40d468b22c5cd2f4dfd8df0ba68"} Jan 29 16:47:35 crc kubenswrapper[4895]: I0129 16:47:35.727845 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" event={"ID":"795c63db-36bb-49ad-9f75-f963d9c19ee9","Type":"ContainerStarted","Data":"67b7f2cb442b0662c90f2a02ab4040b1db981136d5e369e078486be35aca5aef"} Jan 29 16:47:35 crc kubenswrapper[4895]: I0129 16:47:35.745820 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" podStartSLOduration=2.124087592 podStartE2EDuration="3.745799946s" podCreationTimestamp="2026-01-29 16:47:32 +0000 UTC" firstStartedPulling="2026-01-29 16:47:33.711782463 +0000 UTC m=+2137.514759727" lastFinishedPulling="2026-01-29 16:47:35.333494817 +0000 UTC m=+2139.136472081" observedRunningTime="2026-01-29 
16:47:35.742650071 +0000 UTC m=+2139.545627345" watchObservedRunningTime="2026-01-29 16:47:35.745799946 +0000 UTC m=+2139.548777210" Jan 29 16:47:43 crc kubenswrapper[4895]: I0129 16:47:43.821316 4895 generic.go:334] "Generic (PLEG): container finished" podID="795c63db-36bb-49ad-9f75-f963d9c19ee9" containerID="67b7f2cb442b0662c90f2a02ab4040b1db981136d5e369e078486be35aca5aef" exitCode=0 Jan 29 16:47:43 crc kubenswrapper[4895]: I0129 16:47:43.821426 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" event={"ID":"795c63db-36bb-49ad-9f75-f963d9c19ee9","Type":"ContainerDied","Data":"67b7f2cb442b0662c90f2a02ab4040b1db981136d5e369e078486be35aca5aef"} Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.324271 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.468333 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/795c63db-36bb-49ad-9f75-f963d9c19ee9-inventory\") pod \"795c63db-36bb-49ad-9f75-f963d9c19ee9\" (UID: \"795c63db-36bb-49ad-9f75-f963d9c19ee9\") " Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.468717 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/795c63db-36bb-49ad-9f75-f963d9c19ee9-ssh-key-openstack-edpm-ipam\") pod \"795c63db-36bb-49ad-9f75-f963d9c19ee9\" (UID: \"795c63db-36bb-49ad-9f75-f963d9c19ee9\") " Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.469321 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l67gx\" (UniqueName: \"kubernetes.io/projected/795c63db-36bb-49ad-9f75-f963d9c19ee9-kube-api-access-l67gx\") pod \"795c63db-36bb-49ad-9f75-f963d9c19ee9\" (UID: 
\"795c63db-36bb-49ad-9f75-f963d9c19ee9\") " Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.476331 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795c63db-36bb-49ad-9f75-f963d9c19ee9-kube-api-access-l67gx" (OuterVolumeSpecName: "kube-api-access-l67gx") pod "795c63db-36bb-49ad-9f75-f963d9c19ee9" (UID: "795c63db-36bb-49ad-9f75-f963d9c19ee9"). InnerVolumeSpecName "kube-api-access-l67gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.497653 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795c63db-36bb-49ad-9f75-f963d9c19ee9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "795c63db-36bb-49ad-9f75-f963d9c19ee9" (UID: "795c63db-36bb-49ad-9f75-f963d9c19ee9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.501528 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795c63db-36bb-49ad-9f75-f963d9c19ee9-inventory" (OuterVolumeSpecName: "inventory") pod "795c63db-36bb-49ad-9f75-f963d9c19ee9" (UID: "795c63db-36bb-49ad-9f75-f963d9c19ee9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.571610 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l67gx\" (UniqueName: \"kubernetes.io/projected/795c63db-36bb-49ad-9f75-f963d9c19ee9-kube-api-access-l67gx\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.571661 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/795c63db-36bb-49ad-9f75-f963d9c19ee9-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.571673 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/795c63db-36bb-49ad-9f75-f963d9c19ee9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.846124 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" event={"ID":"795c63db-36bb-49ad-9f75-f963d9c19ee9","Type":"ContainerDied","Data":"12cc9f026c31bf1c2f76f55b0a15d886705df40d468b22c5cd2f4dfd8df0ba68"} Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.846192 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12cc9f026c31bf1c2f76f55b0a15d886705df40d468b22c5cd2f4dfd8df0ba68" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.846213 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.924087 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28"] Jan 29 16:47:45 crc kubenswrapper[4895]: E0129 16:47:45.924711 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795c63db-36bb-49ad-9f75-f963d9c19ee9" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.924745 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="795c63db-36bb-49ad-9f75-f963d9c19ee9" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.925022 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="795c63db-36bb-49ad-9f75-f963d9c19ee9" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.925986 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.929121 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.929446 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.929909 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.929988 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:47:45 crc kubenswrapper[4895]: I0129 16:47:45.939427 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28"] Jan 29 16:47:46 crc kubenswrapper[4895]: I0129 16:47:46.082612 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mr4\" (UniqueName: \"kubernetes.io/projected/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-kube-api-access-f2mr4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28\" (UID: \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:47:46 crc kubenswrapper[4895]: I0129 16:47:46.082704 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28\" (UID: \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:47:46 crc kubenswrapper[4895]: I0129 16:47:46.083064 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28\" (UID: \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:47:46 crc kubenswrapper[4895]: I0129 16:47:46.185014 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mr4\" (UniqueName: \"kubernetes.io/projected/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-kube-api-access-f2mr4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28\" (UID: \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:47:46 crc kubenswrapper[4895]: I0129 16:47:46.185119 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28\" (UID: \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:47:46 crc kubenswrapper[4895]: I0129 16:47:46.185193 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28\" (UID: \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:47:46 crc kubenswrapper[4895]: I0129 16:47:46.190243 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28\" (UID: \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:47:46 crc kubenswrapper[4895]: I0129 16:47:46.190322 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28\" (UID: \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:47:46 crc kubenswrapper[4895]: I0129 16:47:46.208779 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mr4\" (UniqueName: \"kubernetes.io/projected/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-kube-api-access-f2mr4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28\" (UID: \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:47:46 crc kubenswrapper[4895]: I0129 16:47:46.254595 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:47:46 crc kubenswrapper[4895]: I0129 16:47:46.818305 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28"] Jan 29 16:47:46 crc kubenswrapper[4895]: I0129 16:47:46.857659 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" event={"ID":"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad","Type":"ContainerStarted","Data":"163bd60a1658ad0a10e4e13abdda958d7e692073b3eed1601326404198a32e06"} Jan 29 16:47:47 crc kubenswrapper[4895]: I0129 16:47:47.869388 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" event={"ID":"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad","Type":"ContainerStarted","Data":"659a015f30f5b7959443a32962882e55991bcd51f4537d24040c3c0e332807d7"} Jan 29 16:47:47 crc kubenswrapper[4895]: I0129 16:47:47.898757 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" podStartSLOduration=2.234603027 podStartE2EDuration="2.898719664s" podCreationTimestamp="2026-01-29 16:47:45 +0000 UTC" firstStartedPulling="2026-01-29 16:47:46.810331046 +0000 UTC m=+2150.613308310" lastFinishedPulling="2026-01-29 16:47:47.474447683 +0000 UTC m=+2151.277424947" observedRunningTime="2026-01-29 16:47:47.884794979 +0000 UTC m=+2151.687772243" watchObservedRunningTime="2026-01-29 16:47:47.898719664 +0000 UTC m=+2151.701696968" Jan 29 16:47:57 crc kubenswrapper[4895]: I0129 16:47:57.823759 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:47:57 crc kubenswrapper[4895]: 
I0129 16:47:57.824540 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:47:59 crc kubenswrapper[4895]: I0129 16:47:59.998335 4895 generic.go:334] "Generic (PLEG): container finished" podID="48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad" containerID="659a015f30f5b7959443a32962882e55991bcd51f4537d24040c3c0e332807d7" exitCode=0 Jan 29 16:47:59 crc kubenswrapper[4895]: I0129 16:47:59.998421 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" event={"ID":"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad","Type":"ContainerDied","Data":"659a015f30f5b7959443a32962882e55991bcd51f4537d24040c3c0e332807d7"} Jan 29 16:48:01 crc kubenswrapper[4895]: I0129 16:48:01.580668 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:48:01 crc kubenswrapper[4895]: I0129 16:48:01.645953 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2mr4\" (UniqueName: \"kubernetes.io/projected/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-kube-api-access-f2mr4\") pod \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\" (UID: \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\") " Jan 29 16:48:01 crc kubenswrapper[4895]: I0129 16:48:01.646018 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-ssh-key-openstack-edpm-ipam\") pod \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\" (UID: \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\") " Jan 29 16:48:01 crc kubenswrapper[4895]: I0129 16:48:01.646232 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-inventory\") pod \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\" (UID: \"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad\") " Jan 29 16:48:01 crc kubenswrapper[4895]: I0129 16:48:01.653203 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-kube-api-access-f2mr4" (OuterVolumeSpecName: "kube-api-access-f2mr4") pod "48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad" (UID: "48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad"). InnerVolumeSpecName "kube-api-access-f2mr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:48:01 crc kubenswrapper[4895]: I0129 16:48:01.679038 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-inventory" (OuterVolumeSpecName: "inventory") pod "48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad" (UID: "48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:48:01 crc kubenswrapper[4895]: I0129 16:48:01.692631 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad" (UID: "48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:48:01 crc kubenswrapper[4895]: I0129 16:48:01.750263 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:01 crc kubenswrapper[4895]: I0129 16:48:01.750307 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2mr4\" (UniqueName: \"kubernetes.io/projected/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-kube-api-access-f2mr4\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:01 crc kubenswrapper[4895]: I0129 16:48:01.750322 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:02 crc kubenswrapper[4895]: I0129 16:48:02.026687 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" event={"ID":"48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad","Type":"ContainerDied","Data":"163bd60a1658ad0a10e4e13abdda958d7e692073b3eed1601326404198a32e06"} Jan 29 16:48:02 crc kubenswrapper[4895]: I0129 16:48:02.027020 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="163bd60a1658ad0a10e4e13abdda958d7e692073b3eed1601326404198a32e06" Jan 29 16:48:02 crc kubenswrapper[4895]: I0129 
16:48:02.026839 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.522532 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qhnp4"] Jan 29 16:48:18 crc kubenswrapper[4895]: E0129 16:48:18.524043 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.524067 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.524325 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.526145 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.538651 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhnp4"] Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.641157 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-catalog-content\") pod \"redhat-marketplace-qhnp4\" (UID: \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\") " pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.641260 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9llqf\" (UniqueName: \"kubernetes.io/projected/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-kube-api-access-9llqf\") pod \"redhat-marketplace-qhnp4\" (UID: \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\") " pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.641591 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-utilities\") pod \"redhat-marketplace-qhnp4\" (UID: \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\") " pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.743929 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-catalog-content\") pod \"redhat-marketplace-qhnp4\" (UID: \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\") " pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.744064 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9llqf\" (UniqueName: \"kubernetes.io/projected/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-kube-api-access-9llqf\") pod \"redhat-marketplace-qhnp4\" (UID: \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\") " pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.744098 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-utilities\") pod \"redhat-marketplace-qhnp4\" (UID: \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\") " pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.744767 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-catalog-content\") pod \"redhat-marketplace-qhnp4\" (UID: \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\") " pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.744883 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-utilities\") pod \"redhat-marketplace-qhnp4\" (UID: \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\") " pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.771628 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9llqf\" (UniqueName: \"kubernetes.io/projected/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-kube-api-access-9llqf\") pod \"redhat-marketplace-qhnp4\" (UID: \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\") " pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:18 crc kubenswrapper[4895]: I0129 16:48:18.871119 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:19 crc kubenswrapper[4895]: I0129 16:48:19.358086 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhnp4"] Jan 29 16:48:20 crc kubenswrapper[4895]: I0129 16:48:20.215919 4895 generic.go:334] "Generic (PLEG): container finished" podID="b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" containerID="6058dc3ecad95049d2553743444132b47844570e671354bf765d52605819a263" exitCode=0 Jan 29 16:48:20 crc kubenswrapper[4895]: I0129 16:48:20.216142 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhnp4" event={"ID":"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62","Type":"ContainerDied","Data":"6058dc3ecad95049d2553743444132b47844570e671354bf765d52605819a263"} Jan 29 16:48:20 crc kubenswrapper[4895]: I0129 16:48:20.216401 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhnp4" event={"ID":"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62","Type":"ContainerStarted","Data":"4fa76a6193fdc622b8bc927191729958cb3439d943231c8ee6f3d018d4ee6d61"} Jan 29 16:48:22 crc kubenswrapper[4895]: I0129 16:48:22.238014 4895 generic.go:334] "Generic (PLEG): container finished" podID="b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" containerID="ad4b5462db4880e439486ddc8cb20a3f6986c6bf69218aee2a7eb7815daf9381" exitCode=0 Jan 29 16:48:22 crc kubenswrapper[4895]: I0129 16:48:22.238134 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhnp4" event={"ID":"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62","Type":"ContainerDied","Data":"ad4b5462db4880e439486ddc8cb20a3f6986c6bf69218aee2a7eb7815daf9381"} Jan 29 16:48:23 crc kubenswrapper[4895]: I0129 16:48:23.889594 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gh7z8"] Jan 29 16:48:23 crc kubenswrapper[4895]: I0129 16:48:23.893465 4895 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:23 crc kubenswrapper[4895]: I0129 16:48:23.956753 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gh7z8"] Jan 29 16:48:23 crc kubenswrapper[4895]: I0129 16:48:23.964731 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-catalog-content\") pod \"redhat-operators-gh7z8\" (UID: \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\") " pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:23 crc kubenswrapper[4895]: I0129 16:48:23.965122 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9r8m\" (UniqueName: \"kubernetes.io/projected/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-kube-api-access-c9r8m\") pod \"redhat-operators-gh7z8\" (UID: \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\") " pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:23 crc kubenswrapper[4895]: I0129 16:48:23.965355 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-utilities\") pod \"redhat-operators-gh7z8\" (UID: \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\") " pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:24 crc kubenswrapper[4895]: I0129 16:48:24.067586 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-utilities\") pod \"redhat-operators-gh7z8\" (UID: \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\") " pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:24 crc kubenswrapper[4895]: I0129 16:48:24.067747 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-catalog-content\") pod \"redhat-operators-gh7z8\" (UID: \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\") " pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:24 crc kubenswrapper[4895]: I0129 16:48:24.067917 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9r8m\" (UniqueName: \"kubernetes.io/projected/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-kube-api-access-c9r8m\") pod \"redhat-operators-gh7z8\" (UID: \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\") " pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:24 crc kubenswrapper[4895]: I0129 16:48:24.068405 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-utilities\") pod \"redhat-operators-gh7z8\" (UID: \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\") " pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:24 crc kubenswrapper[4895]: I0129 16:48:24.068509 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-catalog-content\") pod \"redhat-operators-gh7z8\" (UID: \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\") " pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:24 crc kubenswrapper[4895]: I0129 16:48:24.096063 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9r8m\" (UniqueName: \"kubernetes.io/projected/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-kube-api-access-c9r8m\") pod \"redhat-operators-gh7z8\" (UID: \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\") " pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:24 crc kubenswrapper[4895]: I0129 16:48:24.220142 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:24 crc kubenswrapper[4895]: I0129 16:48:24.286261 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhnp4" event={"ID":"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62","Type":"ContainerStarted","Data":"ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8"} Jan 29 16:48:24 crc kubenswrapper[4895]: I0129 16:48:24.313919 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qhnp4" podStartSLOduration=3.316051709 podStartE2EDuration="6.313892636s" podCreationTimestamp="2026-01-29 16:48:18 +0000 UTC" firstStartedPulling="2026-01-29 16:48:20.219559373 +0000 UTC m=+2184.022536637" lastFinishedPulling="2026-01-29 16:48:23.2174003 +0000 UTC m=+2187.020377564" observedRunningTime="2026-01-29 16:48:24.306235729 +0000 UTC m=+2188.109213013" watchObservedRunningTime="2026-01-29 16:48:24.313892636 +0000 UTC m=+2188.116869910" Jan 29 16:48:24 crc kubenswrapper[4895]: I0129 16:48:24.741787 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gh7z8"] Jan 29 16:48:25 crc kubenswrapper[4895]: I0129 16:48:25.297936 4895 generic.go:334] "Generic (PLEG): container finished" podID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerID="3fc38c66bf5eb02ddb2acdcdfbb7678957b2c661bc64d1a14ba7ea4ef30929bb" exitCode=0 Jan 29 16:48:25 crc kubenswrapper[4895]: I0129 16:48:25.298064 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh7z8" event={"ID":"2f39f0a6-0d3b-48c8-8e15-682583a6afe0","Type":"ContainerDied","Data":"3fc38c66bf5eb02ddb2acdcdfbb7678957b2c661bc64d1a14ba7ea4ef30929bb"} Jan 29 16:48:25 crc kubenswrapper[4895]: I0129 16:48:25.298134 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh7z8" 
event={"ID":"2f39f0a6-0d3b-48c8-8e15-682583a6afe0","Type":"ContainerStarted","Data":"534bb4a14a35d6f77eb3426acffd1011b5e5cc6bbbe3f8cbd40740f18af29b4e"} Jan 29 16:48:27 crc kubenswrapper[4895]: I0129 16:48:27.326255 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh7z8" event={"ID":"2f39f0a6-0d3b-48c8-8e15-682583a6afe0","Type":"ContainerStarted","Data":"176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb"} Jan 29 16:48:27 crc kubenswrapper[4895]: I0129 16:48:27.822918 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:48:27 crc kubenswrapper[4895]: I0129 16:48:27.822985 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:48:27 crc kubenswrapper[4895]: I0129 16:48:27.823040 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:48:27 crc kubenswrapper[4895]: I0129 16:48:27.824999 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3e768471e09a634e8a9a0e0fd365a02d913a3c9ced9b63c4b19dc39c06cee68"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:48:27 crc kubenswrapper[4895]: I0129 16:48:27.825071 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://d3e768471e09a634e8a9a0e0fd365a02d913a3c9ced9b63c4b19dc39c06cee68" gracePeriod=600 Jan 29 16:48:28 crc kubenswrapper[4895]: I0129 16:48:28.871710 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:28 crc kubenswrapper[4895]: I0129 16:48:28.872175 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:28 crc kubenswrapper[4895]: I0129 16:48:28.928356 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:29 crc kubenswrapper[4895]: I0129 16:48:29.407743 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:29 crc kubenswrapper[4895]: I0129 16:48:29.900441 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-868pk"] Jan 29 16:48:29 crc kubenswrapper[4895]: I0129 16:48:29.903742 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:29 crc kubenswrapper[4895]: I0129 16:48:29.919776 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-868pk"] Jan 29 16:48:30 crc kubenswrapper[4895]: I0129 16:48:30.027656 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abddb8da-df4e-4e84-806a-74a8d30fdfe4-utilities\") pod \"community-operators-868pk\" (UID: \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\") " pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:30 crc kubenswrapper[4895]: I0129 16:48:30.028079 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6zv\" (UniqueName: \"kubernetes.io/projected/abddb8da-df4e-4e84-806a-74a8d30fdfe4-kube-api-access-wf6zv\") pod \"community-operators-868pk\" (UID: \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\") " pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:30 crc kubenswrapper[4895]: I0129 16:48:30.028180 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abddb8da-df4e-4e84-806a-74a8d30fdfe4-catalog-content\") pod \"community-operators-868pk\" (UID: \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\") " pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:30 crc kubenswrapper[4895]: I0129 16:48:30.131071 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abddb8da-df4e-4e84-806a-74a8d30fdfe4-catalog-content\") pod \"community-operators-868pk\" (UID: \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\") " pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:30 crc kubenswrapper[4895]: I0129 16:48:30.131689 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abddb8da-df4e-4e84-806a-74a8d30fdfe4-catalog-content\") pod \"community-operators-868pk\" (UID: \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\") " pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:30 crc kubenswrapper[4895]: I0129 16:48:30.132163 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abddb8da-df4e-4e84-806a-74a8d30fdfe4-utilities\") pod \"community-operators-868pk\" (UID: \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\") " pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:30 crc kubenswrapper[4895]: I0129 16:48:30.132321 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6zv\" (UniqueName: \"kubernetes.io/projected/abddb8da-df4e-4e84-806a-74a8d30fdfe4-kube-api-access-wf6zv\") pod \"community-operators-868pk\" (UID: \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\") " pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:30 crc kubenswrapper[4895]: I0129 16:48:30.133314 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abddb8da-df4e-4e84-806a-74a8d30fdfe4-utilities\") pod \"community-operators-868pk\" (UID: \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\") " pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:30 crc kubenswrapper[4895]: I0129 16:48:30.156929 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6zv\" (UniqueName: \"kubernetes.io/projected/abddb8da-df4e-4e84-806a-74a8d30fdfe4-kube-api-access-wf6zv\") pod \"community-operators-868pk\" (UID: \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\") " pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:30 crc kubenswrapper[4895]: I0129 16:48:30.246698 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:30 crc kubenswrapper[4895]: I0129 16:48:30.766563 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-868pk"] Jan 29 16:48:30 crc kubenswrapper[4895]: W0129 16:48:30.769778 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabddb8da_df4e_4e84_806a_74a8d30fdfe4.slice/crio-25a48874dbf27db0895238083c56ed103a9ef34db401434335fc45737ce3c3e1 WatchSource:0}: Error finding container 25a48874dbf27db0895238083c56ed103a9ef34db401434335fc45737ce3c3e1: Status 404 returned error can't find the container with id 25a48874dbf27db0895238083c56ed103a9ef34db401434335fc45737ce3c3e1 Jan 29 16:48:31 crc kubenswrapper[4895]: I0129 16:48:31.374666 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-868pk" event={"ID":"abddb8da-df4e-4e84-806a-74a8d30fdfe4","Type":"ContainerStarted","Data":"25a48874dbf27db0895238083c56ed103a9ef34db401434335fc45737ce3c3e1"} Jan 29 16:48:31 crc kubenswrapper[4895]: E0129 16:48:31.560179 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f39f0a6_0d3b_48c8_8e15_682583a6afe0.slice/crio-conmon-176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:48:32 crc kubenswrapper[4895]: I0129 16:48:32.281298 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhnp4"] Jan 29 16:48:32 crc kubenswrapper[4895]: I0129 16:48:32.282531 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qhnp4" podUID="b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" containerName="registry-server" 
containerID="cri-o://ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8" gracePeriod=2 Jan 29 16:48:32 crc kubenswrapper[4895]: I0129 16:48:32.390653 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="d3e768471e09a634e8a9a0e0fd365a02d913a3c9ced9b63c4b19dc39c06cee68" exitCode=0 Jan 29 16:48:32 crc kubenswrapper[4895]: I0129 16:48:32.390710 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"d3e768471e09a634e8a9a0e0fd365a02d913a3c9ced9b63c4b19dc39c06cee68"} Jan 29 16:48:32 crc kubenswrapper[4895]: I0129 16:48:32.390803 4895 scope.go:117] "RemoveContainer" containerID="94ee363a25ad74ee8018c7a86fa28310e156106cfb574b55497f7e20d6135230" Jan 29 16:48:32 crc kubenswrapper[4895]: I0129 16:48:32.395286 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-868pk" event={"ID":"abddb8da-df4e-4e84-806a-74a8d30fdfe4","Type":"ContainerStarted","Data":"c1cd439acffe76094237944cb070078fd0adfe450df98e4e69627cf4ebe26923"} Jan 29 16:48:32 crc kubenswrapper[4895]: I0129 16:48:32.401715 4895 generic.go:334] "Generic (PLEG): container finished" podID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerID="176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb" exitCode=0 Jan 29 16:48:32 crc kubenswrapper[4895]: I0129 16:48:32.401796 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh7z8" event={"ID":"2f39f0a6-0d3b-48c8-8e15-682583a6afe0","Type":"ContainerDied","Data":"176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb"} Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.297880 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.419101 4895 generic.go:334] "Generic (PLEG): container finished" podID="abddb8da-df4e-4e84-806a-74a8d30fdfe4" containerID="c1cd439acffe76094237944cb070078fd0adfe450df98e4e69627cf4ebe26923" exitCode=0 Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.419239 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-868pk" event={"ID":"abddb8da-df4e-4e84-806a-74a8d30fdfe4","Type":"ContainerDied","Data":"c1cd439acffe76094237944cb070078fd0adfe450df98e4e69627cf4ebe26923"} Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.419360 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-utilities\") pod \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\" (UID: \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\") " Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.420337 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-catalog-content\") pod \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\" (UID: \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\") " Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.420386 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9llqf\" (UniqueName: \"kubernetes.io/projected/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-kube-api-access-9llqf\") pod \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\" (UID: \"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62\") " Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.422057 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-utilities" (OuterVolumeSpecName: "utilities") pod 
"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" (UID: "b5f4fef9-328a-4f4e-9cd9-547ffb15ab62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.435972 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617"} Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.438500 4895 generic.go:334] "Generic (PLEG): container finished" podID="b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" containerID="ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8" exitCode=0 Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.438551 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhnp4" event={"ID":"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62","Type":"ContainerDied","Data":"ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8"} Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.438569 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhnp4" event={"ID":"b5f4fef9-328a-4f4e-9cd9-547ffb15ab62","Type":"ContainerDied","Data":"4fa76a6193fdc622b8bc927191729958cb3439d943231c8ee6f3d018d4ee6d61"} Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.438591 4895 scope.go:117] "RemoveContainer" containerID="ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.438799 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhnp4" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.443505 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" (UID: "b5f4fef9-328a-4f4e-9cd9-547ffb15ab62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.447289 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-kube-api-access-9llqf" (OuterVolumeSpecName: "kube-api-access-9llqf") pod "b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" (UID: "b5f4fef9-328a-4f4e-9cd9-547ffb15ab62"). InnerVolumeSpecName "kube-api-access-9llqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.462759 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh7z8" event={"ID":"2f39f0a6-0d3b-48c8-8e15-682583a6afe0","Type":"ContainerStarted","Data":"08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6"} Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.506954 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gh7z8" podStartSLOduration=2.983329435 podStartE2EDuration="10.506931119s" podCreationTimestamp="2026-01-29 16:48:23 +0000 UTC" firstStartedPulling="2026-01-29 16:48:25.300478623 +0000 UTC m=+2189.103455887" lastFinishedPulling="2026-01-29 16:48:32.824080297 +0000 UTC m=+2196.627057571" observedRunningTime="2026-01-29 16:48:33.504582155 +0000 UTC m=+2197.307559419" watchObservedRunningTime="2026-01-29 16:48:33.506931119 +0000 UTC m=+2197.309908383" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 
16:48:33.513076 4895 scope.go:117] "RemoveContainer" containerID="ad4b5462db4880e439486ddc8cb20a3f6986c6bf69218aee2a7eb7815daf9381" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.525937 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.525977 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.525993 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9llqf\" (UniqueName: \"kubernetes.io/projected/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62-kube-api-access-9llqf\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.551397 4895 scope.go:117] "RemoveContainer" containerID="6058dc3ecad95049d2553743444132b47844570e671354bf765d52605819a263" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.579587 4895 scope.go:117] "RemoveContainer" containerID="ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8" Jan 29 16:48:33 crc kubenswrapper[4895]: E0129 16:48:33.580265 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8\": container with ID starting with ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8 not found: ID does not exist" containerID="ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.580321 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8"} 
err="failed to get container status \"ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8\": rpc error: code = NotFound desc = could not find container \"ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8\": container with ID starting with ca683c35aece6fdb9c9ae9c441d6a9c40571eecb73b3b0d8ee0e875bb34cc2a8 not found: ID does not exist" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.580364 4895 scope.go:117] "RemoveContainer" containerID="ad4b5462db4880e439486ddc8cb20a3f6986c6bf69218aee2a7eb7815daf9381" Jan 29 16:48:33 crc kubenswrapper[4895]: E0129 16:48:33.580919 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad4b5462db4880e439486ddc8cb20a3f6986c6bf69218aee2a7eb7815daf9381\": container with ID starting with ad4b5462db4880e439486ddc8cb20a3f6986c6bf69218aee2a7eb7815daf9381 not found: ID does not exist" containerID="ad4b5462db4880e439486ddc8cb20a3f6986c6bf69218aee2a7eb7815daf9381" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.580953 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad4b5462db4880e439486ddc8cb20a3f6986c6bf69218aee2a7eb7815daf9381"} err="failed to get container status \"ad4b5462db4880e439486ddc8cb20a3f6986c6bf69218aee2a7eb7815daf9381\": rpc error: code = NotFound desc = could not find container \"ad4b5462db4880e439486ddc8cb20a3f6986c6bf69218aee2a7eb7815daf9381\": container with ID starting with ad4b5462db4880e439486ddc8cb20a3f6986c6bf69218aee2a7eb7815daf9381 not found: ID does not exist" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.580968 4895 scope.go:117] "RemoveContainer" containerID="6058dc3ecad95049d2553743444132b47844570e671354bf765d52605819a263" Jan 29 16:48:33 crc kubenswrapper[4895]: E0129 16:48:33.581297 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6058dc3ecad95049d2553743444132b47844570e671354bf765d52605819a263\": container with ID starting with 6058dc3ecad95049d2553743444132b47844570e671354bf765d52605819a263 not found: ID does not exist" containerID="6058dc3ecad95049d2553743444132b47844570e671354bf765d52605819a263" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.581340 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6058dc3ecad95049d2553743444132b47844570e671354bf765d52605819a263"} err="failed to get container status \"6058dc3ecad95049d2553743444132b47844570e671354bf765d52605819a263\": rpc error: code = NotFound desc = could not find container \"6058dc3ecad95049d2553743444132b47844570e671354bf765d52605819a263\": container with ID starting with 6058dc3ecad95049d2553743444132b47844570e671354bf765d52605819a263 not found: ID does not exist" Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.779074 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhnp4"] Jan 29 16:48:33 crc kubenswrapper[4895]: I0129 16:48:33.790028 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhnp4"] Jan 29 16:48:34 crc kubenswrapper[4895]: I0129 16:48:34.220645 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:34 crc kubenswrapper[4895]: I0129 16:48:34.221242 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:48:35 crc kubenswrapper[4895]: I0129 16:48:35.053163 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" path="/var/lib/kubelet/pods/b5f4fef9-328a-4f4e-9cd9-547ffb15ab62/volumes" Jan 29 16:48:35 crc kubenswrapper[4895]: I0129 16:48:35.270636 4895 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-gh7z8" podUID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerName="registry-server" probeResult="failure" output=< Jan 29 16:48:35 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 16:48:35 crc kubenswrapper[4895]: > Jan 29 16:48:35 crc kubenswrapper[4895]: I0129 16:48:35.486170 4895 generic.go:334] "Generic (PLEG): container finished" podID="abddb8da-df4e-4e84-806a-74a8d30fdfe4" containerID="edb8aaef6c7ece9844b533463fc6c866cf28a1bf978fc567c760c81dc1d64540" exitCode=0 Jan 29 16:48:35 crc kubenswrapper[4895]: I0129 16:48:35.486229 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-868pk" event={"ID":"abddb8da-df4e-4e84-806a-74a8d30fdfe4","Type":"ContainerDied","Data":"edb8aaef6c7ece9844b533463fc6c866cf28a1bf978fc567c760c81dc1d64540"} Jan 29 16:48:36 crc kubenswrapper[4895]: I0129 16:48:36.497882 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-868pk" event={"ID":"abddb8da-df4e-4e84-806a-74a8d30fdfe4","Type":"ContainerStarted","Data":"250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066"} Jan 29 16:48:36 crc kubenswrapper[4895]: I0129 16:48:36.521032 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-868pk" podStartSLOduration=4.845636226 podStartE2EDuration="7.521001573s" podCreationTimestamp="2026-01-29 16:48:29 +0000 UTC" firstStartedPulling="2026-01-29 16:48:33.423234645 +0000 UTC m=+2197.226211909" lastFinishedPulling="2026-01-29 16:48:36.098599982 +0000 UTC m=+2199.901577256" observedRunningTime="2026-01-29 16:48:36.515673939 +0000 UTC m=+2200.318651233" watchObservedRunningTime="2026-01-29 16:48:36.521001573 +0000 UTC m=+2200.323978867" Jan 29 16:48:40 crc kubenswrapper[4895]: I0129 16:48:40.254451 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:40 crc kubenswrapper[4895]: I0129 16:48:40.255487 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:40 crc kubenswrapper[4895]: I0129 16:48:40.325928 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:45 crc kubenswrapper[4895]: I0129 16:48:45.290834 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gh7z8" podUID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerName="registry-server" probeResult="failure" output=< Jan 29 16:48:45 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 16:48:45 crc kubenswrapper[4895]: > Jan 29 16:48:50 crc kubenswrapper[4895]: I0129 16:48:50.314860 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:50 crc kubenswrapper[4895]: I0129 16:48:50.375465 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-868pk"] Jan 29 16:48:51 crc kubenswrapper[4895]: I0129 16:48:51.302303 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-868pk" podUID="abddb8da-df4e-4e84-806a-74a8d30fdfe4" containerName="registry-server" containerID="cri-o://250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066" gracePeriod=2 Jan 29 16:48:51 crc kubenswrapper[4895]: I0129 16:48:51.781431 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:51 crc kubenswrapper[4895]: I0129 16:48:51.823049 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abddb8da-df4e-4e84-806a-74a8d30fdfe4-catalog-content\") pod \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\" (UID: \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\") " Jan 29 16:48:51 crc kubenswrapper[4895]: I0129 16:48:51.823128 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6zv\" (UniqueName: \"kubernetes.io/projected/abddb8da-df4e-4e84-806a-74a8d30fdfe4-kube-api-access-wf6zv\") pod \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\" (UID: \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\") " Jan 29 16:48:51 crc kubenswrapper[4895]: I0129 16:48:51.823196 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abddb8da-df4e-4e84-806a-74a8d30fdfe4-utilities\") pod \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\" (UID: \"abddb8da-df4e-4e84-806a-74a8d30fdfe4\") " Jan 29 16:48:51 crc kubenswrapper[4895]: I0129 16:48:51.824368 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abddb8da-df4e-4e84-806a-74a8d30fdfe4-utilities" (OuterVolumeSpecName: "utilities") pod "abddb8da-df4e-4e84-806a-74a8d30fdfe4" (UID: "abddb8da-df4e-4e84-806a-74a8d30fdfe4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:48:51 crc kubenswrapper[4895]: I0129 16:48:51.830794 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abddb8da-df4e-4e84-806a-74a8d30fdfe4-kube-api-access-wf6zv" (OuterVolumeSpecName: "kube-api-access-wf6zv") pod "abddb8da-df4e-4e84-806a-74a8d30fdfe4" (UID: "abddb8da-df4e-4e84-806a-74a8d30fdfe4"). InnerVolumeSpecName "kube-api-access-wf6zv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:48:51 crc kubenswrapper[4895]: I0129 16:48:51.890026 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abddb8da-df4e-4e84-806a-74a8d30fdfe4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abddb8da-df4e-4e84-806a-74a8d30fdfe4" (UID: "abddb8da-df4e-4e84-806a-74a8d30fdfe4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:48:51 crc kubenswrapper[4895]: I0129 16:48:51.925690 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abddb8da-df4e-4e84-806a-74a8d30fdfe4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:51 crc kubenswrapper[4895]: I0129 16:48:51.925733 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abddb8da-df4e-4e84-806a-74a8d30fdfe4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:51 crc kubenswrapper[4895]: I0129 16:48:51.925743 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf6zv\" (UniqueName: \"kubernetes.io/projected/abddb8da-df4e-4e84-806a-74a8d30fdfe4-kube-api-access-wf6zv\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.317815 4895 generic.go:334] "Generic (PLEG): container finished" podID="abddb8da-df4e-4e84-806a-74a8d30fdfe4" containerID="250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066" exitCode=0 Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.317902 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-868pk" event={"ID":"abddb8da-df4e-4e84-806a-74a8d30fdfe4","Type":"ContainerDied","Data":"250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066"} Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.317959 4895 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-868pk" Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.317990 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-868pk" event={"ID":"abddb8da-df4e-4e84-806a-74a8d30fdfe4","Type":"ContainerDied","Data":"25a48874dbf27db0895238083c56ed103a9ef34db401434335fc45737ce3c3e1"} Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.318027 4895 scope.go:117] "RemoveContainer" containerID="250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066" Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.366898 4895 scope.go:117] "RemoveContainer" containerID="edb8aaef6c7ece9844b533463fc6c866cf28a1bf978fc567c760c81dc1d64540" Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.373440 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-868pk"] Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.382877 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-868pk"] Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.394671 4895 scope.go:117] "RemoveContainer" containerID="c1cd439acffe76094237944cb070078fd0adfe450df98e4e69627cf4ebe26923" Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.432845 4895 scope.go:117] "RemoveContainer" containerID="250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066" Jan 29 16:48:52 crc kubenswrapper[4895]: E0129 16:48:52.433421 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066\": container with ID starting with 250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066 not found: ID does not exist" containerID="250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066" Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.433483 
4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066"} err="failed to get container status \"250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066\": rpc error: code = NotFound desc = could not find container \"250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066\": container with ID starting with 250cfc2df6241295fe7d94095213cac73a264d71d3824a03942d371374878066 not found: ID does not exist" Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.433523 4895 scope.go:117] "RemoveContainer" containerID="edb8aaef6c7ece9844b533463fc6c866cf28a1bf978fc567c760c81dc1d64540" Jan 29 16:48:52 crc kubenswrapper[4895]: E0129 16:48:52.434268 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edb8aaef6c7ece9844b533463fc6c866cf28a1bf978fc567c760c81dc1d64540\": container with ID starting with edb8aaef6c7ece9844b533463fc6c866cf28a1bf978fc567c760c81dc1d64540 not found: ID does not exist" containerID="edb8aaef6c7ece9844b533463fc6c866cf28a1bf978fc567c760c81dc1d64540" Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.434331 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb8aaef6c7ece9844b533463fc6c866cf28a1bf978fc567c760c81dc1d64540"} err="failed to get container status \"edb8aaef6c7ece9844b533463fc6c866cf28a1bf978fc567c760c81dc1d64540\": rpc error: code = NotFound desc = could not find container \"edb8aaef6c7ece9844b533463fc6c866cf28a1bf978fc567c760c81dc1d64540\": container with ID starting with edb8aaef6c7ece9844b533463fc6c866cf28a1bf978fc567c760c81dc1d64540 not found: ID does not exist" Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.434373 4895 scope.go:117] "RemoveContainer" containerID="c1cd439acffe76094237944cb070078fd0adfe450df98e4e69627cf4ebe26923" Jan 29 16:48:52 crc kubenswrapper[4895]: E0129 
16:48:52.434769 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1cd439acffe76094237944cb070078fd0adfe450df98e4e69627cf4ebe26923\": container with ID starting with c1cd439acffe76094237944cb070078fd0adfe450df98e4e69627cf4ebe26923 not found: ID does not exist" containerID="c1cd439acffe76094237944cb070078fd0adfe450df98e4e69627cf4ebe26923" Jan 29 16:48:52 crc kubenswrapper[4895]: I0129 16:48:52.434810 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1cd439acffe76094237944cb070078fd0adfe450df98e4e69627cf4ebe26923"} err="failed to get container status \"c1cd439acffe76094237944cb070078fd0adfe450df98e4e69627cf4ebe26923\": rpc error: code = NotFound desc = could not find container \"c1cd439acffe76094237944cb070078fd0adfe450df98e4e69627cf4ebe26923\": container with ID starting with c1cd439acffe76094237944cb070078fd0adfe450df98e4e69627cf4ebe26923 not found: ID does not exist" Jan 29 16:48:53 crc kubenswrapper[4895]: I0129 16:48:53.049130 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abddb8da-df4e-4e84-806a-74a8d30fdfe4" path="/var/lib/kubelet/pods/abddb8da-df4e-4e84-806a-74a8d30fdfe4/volumes" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.269281 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gh7z8" podUID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerName="registry-server" probeResult="failure" output=< Jan 29 16:48:55 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 16:48:55 crc kubenswrapper[4895]: > Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.378548 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gl9zs"] Jan 29 16:48:55 crc kubenswrapper[4895]: E0129 16:48:55.379037 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" containerName="registry-server" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.379061 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" containerName="registry-server" Jan 29 16:48:55 crc kubenswrapper[4895]: E0129 16:48:55.379078 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abddb8da-df4e-4e84-806a-74a8d30fdfe4" containerName="extract-utilities" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.379086 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="abddb8da-df4e-4e84-806a-74a8d30fdfe4" containerName="extract-utilities" Jan 29 16:48:55 crc kubenswrapper[4895]: E0129 16:48:55.379103 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abddb8da-df4e-4e84-806a-74a8d30fdfe4" containerName="extract-content" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.379109 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="abddb8da-df4e-4e84-806a-74a8d30fdfe4" containerName="extract-content" Jan 29 16:48:55 crc kubenswrapper[4895]: E0129 16:48:55.379118 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abddb8da-df4e-4e84-806a-74a8d30fdfe4" containerName="registry-server" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.379124 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="abddb8da-df4e-4e84-806a-74a8d30fdfe4" containerName="registry-server" Jan 29 16:48:55 crc kubenswrapper[4895]: E0129 16:48:55.379148 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" containerName="extract-utilities" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.379158 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" containerName="extract-utilities" Jan 29 16:48:55 crc kubenswrapper[4895]: E0129 16:48:55.379176 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" containerName="extract-content" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.379183 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" containerName="extract-content" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.379374 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="abddb8da-df4e-4e84-806a-74a8d30fdfe4" containerName="registry-server" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.379391 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f4fef9-328a-4f4e-9cd9-547ffb15ab62" containerName="registry-server" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.381095 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.399402 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gl9zs"] Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.406001 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9r2d\" (UniqueName: \"kubernetes.io/projected/d35ad095-ddd8-4b52-9453-9e3f2818595c-kube-api-access-l9r2d\") pod \"certified-operators-gl9zs\" (UID: \"d35ad095-ddd8-4b52-9453-9e3f2818595c\") " pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.406160 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35ad095-ddd8-4b52-9453-9e3f2818595c-catalog-content\") pod \"certified-operators-gl9zs\" (UID: \"d35ad095-ddd8-4b52-9453-9e3f2818595c\") " pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.406322 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35ad095-ddd8-4b52-9453-9e3f2818595c-utilities\") pod \"certified-operators-gl9zs\" (UID: \"d35ad095-ddd8-4b52-9453-9e3f2818595c\") " pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.508602 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9r2d\" (UniqueName: \"kubernetes.io/projected/d35ad095-ddd8-4b52-9453-9e3f2818595c-kube-api-access-l9r2d\") pod \"certified-operators-gl9zs\" (UID: \"d35ad095-ddd8-4b52-9453-9e3f2818595c\") " pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.508679 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35ad095-ddd8-4b52-9453-9e3f2818595c-catalog-content\") pod \"certified-operators-gl9zs\" (UID: \"d35ad095-ddd8-4b52-9453-9e3f2818595c\") " pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.508733 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35ad095-ddd8-4b52-9453-9e3f2818595c-utilities\") pod \"certified-operators-gl9zs\" (UID: \"d35ad095-ddd8-4b52-9453-9e3f2818595c\") " pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.509566 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35ad095-ddd8-4b52-9453-9e3f2818595c-catalog-content\") pod \"certified-operators-gl9zs\" (UID: \"d35ad095-ddd8-4b52-9453-9e3f2818595c\") " pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.509582 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35ad095-ddd8-4b52-9453-9e3f2818595c-utilities\") pod \"certified-operators-gl9zs\" (UID: \"d35ad095-ddd8-4b52-9453-9e3f2818595c\") " pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.531042 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9r2d\" (UniqueName: \"kubernetes.io/projected/d35ad095-ddd8-4b52-9453-9e3f2818595c-kube-api-access-l9r2d\") pod \"certified-operators-gl9zs\" (UID: \"d35ad095-ddd8-4b52-9453-9e3f2818595c\") " pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:48:55 crc kubenswrapper[4895]: I0129 16:48:55.709994 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:48:56 crc kubenswrapper[4895]: I0129 16:48:56.242563 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gl9zs"] Jan 29 16:48:56 crc kubenswrapper[4895]: I0129 16:48:56.374153 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl9zs" event={"ID":"d35ad095-ddd8-4b52-9453-9e3f2818595c","Type":"ContainerStarted","Data":"59eaa8a8ae1dc57fd6dda7968aa75e2ba43f8fc3d5dd3294a8b8c31a466333a4"} Jan 29 16:48:57 crc kubenswrapper[4895]: I0129 16:48:57.385499 4895 generic.go:334] "Generic (PLEG): container finished" podID="d35ad095-ddd8-4b52-9453-9e3f2818595c" containerID="de847773ed132b0d465d2f4b0493aace90099e5f9711dda79dc03f0172f08b31" exitCode=0 Jan 29 16:48:57 crc kubenswrapper[4895]: I0129 16:48:57.385607 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl9zs" event={"ID":"d35ad095-ddd8-4b52-9453-9e3f2818595c","Type":"ContainerDied","Data":"de847773ed132b0d465d2f4b0493aace90099e5f9711dda79dc03f0172f08b31"} Jan 29 16:48:59 crc kubenswrapper[4895]: I0129 
16:48:59.405706 4895 generic.go:334] "Generic (PLEG): container finished" podID="d35ad095-ddd8-4b52-9453-9e3f2818595c" containerID="8695b1324210f802102895475bcc6fa597f61ccaf86ea7e3bc74a0adfb91962e" exitCode=0 Jan 29 16:48:59 crc kubenswrapper[4895]: I0129 16:48:59.405840 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl9zs" event={"ID":"d35ad095-ddd8-4b52-9453-9e3f2818595c","Type":"ContainerDied","Data":"8695b1324210f802102895475bcc6fa597f61ccaf86ea7e3bc74a0adfb91962e"} Jan 29 16:49:00 crc kubenswrapper[4895]: I0129 16:49:00.417380 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl9zs" event={"ID":"d35ad095-ddd8-4b52-9453-9e3f2818595c","Type":"ContainerStarted","Data":"a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9"} Jan 29 16:49:00 crc kubenswrapper[4895]: I0129 16:49:00.450901 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gl9zs" podStartSLOduration=3.001794685 podStartE2EDuration="5.450844289s" podCreationTimestamp="2026-01-29 16:48:55 +0000 UTC" firstStartedPulling="2026-01-29 16:48:57.390064507 +0000 UTC m=+2221.193041771" lastFinishedPulling="2026-01-29 16:48:59.839114111 +0000 UTC m=+2223.642091375" observedRunningTime="2026-01-29 16:49:00.438922278 +0000 UTC m=+2224.241899552" watchObservedRunningTime="2026-01-29 16:49:00.450844289 +0000 UTC m=+2224.253821553" Jan 29 16:49:04 crc kubenswrapper[4895]: I0129 16:49:04.298834 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:49:04 crc kubenswrapper[4895]: I0129 16:49:04.378794 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:49:04 crc kubenswrapper[4895]: I0129 16:49:04.548945 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-gh7z8"] Jan 29 16:49:05 crc kubenswrapper[4895]: I0129 16:49:05.471718 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gh7z8" podUID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerName="registry-server" containerID="cri-o://08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6" gracePeriod=2 Jan 29 16:49:05 crc kubenswrapper[4895]: I0129 16:49:05.710597 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:49:05 crc kubenswrapper[4895]: I0129 16:49:05.711179 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:49:05 crc kubenswrapper[4895]: I0129 16:49:05.784145 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:49:05 crc kubenswrapper[4895]: I0129 16:49:05.969399 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.152804 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9r8m\" (UniqueName: \"kubernetes.io/projected/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-kube-api-access-c9r8m\") pod \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\" (UID: \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\") " Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.153143 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-utilities\") pod \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\" (UID: \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\") " Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.153175 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-catalog-content\") pod \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\" (UID: \"2f39f0a6-0d3b-48c8-8e15-682583a6afe0\") " Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.153818 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-utilities" (OuterVolumeSpecName: "utilities") pod "2f39f0a6-0d3b-48c8-8e15-682583a6afe0" (UID: "2f39f0a6-0d3b-48c8-8e15-682583a6afe0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.163396 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-kube-api-access-c9r8m" (OuterVolumeSpecName: "kube-api-access-c9r8m") pod "2f39f0a6-0d3b-48c8-8e15-682583a6afe0" (UID: "2f39f0a6-0d3b-48c8-8e15-682583a6afe0"). InnerVolumeSpecName "kube-api-access-c9r8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.256148 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.256706 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9r8m\" (UniqueName: \"kubernetes.io/projected/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-kube-api-access-c9r8m\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.292554 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f39f0a6-0d3b-48c8-8e15-682583a6afe0" (UID: "2f39f0a6-0d3b-48c8-8e15-682583a6afe0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.358979 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f39f0a6-0d3b-48c8-8e15-682583a6afe0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.488925 4895 generic.go:334] "Generic (PLEG): container finished" podID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerID="08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6" exitCode=0 Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.489039 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh7z8" event={"ID":"2f39f0a6-0d3b-48c8-8e15-682583a6afe0","Type":"ContainerDied","Data":"08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6"} Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.489089 4895 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gh7z8" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.489110 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gh7z8" event={"ID":"2f39f0a6-0d3b-48c8-8e15-682583a6afe0","Type":"ContainerDied","Data":"534bb4a14a35d6f77eb3426acffd1011b5e5cc6bbbe3f8cbd40740f18af29b4e"} Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.489141 4895 scope.go:117] "RemoveContainer" containerID="08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.522654 4895 scope.go:117] "RemoveContainer" containerID="176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.534197 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gh7z8"] Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.542271 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gh7z8"] Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.553326 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.575498 4895 scope.go:117] "RemoveContainer" containerID="3fc38c66bf5eb02ddb2acdcdfbb7678957b2c661bc64d1a14ba7ea4ef30929bb" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.626859 4895 scope.go:117] "RemoveContainer" containerID="08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6" Jan 29 16:49:06 crc kubenswrapper[4895]: E0129 16:49:06.627677 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6\": container with ID starting with 
08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6 not found: ID does not exist" containerID="08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.627730 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6"} err="failed to get container status \"08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6\": rpc error: code = NotFound desc = could not find container \"08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6\": container with ID starting with 08fe9034fcbf6d8101b322ba0fd97449b1f26154dde62096be58168b8c5331b6 not found: ID does not exist" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.627762 4895 scope.go:117] "RemoveContainer" containerID="176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb" Jan 29 16:49:06 crc kubenswrapper[4895]: E0129 16:49:06.628349 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb\": container with ID starting with 176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb not found: ID does not exist" containerID="176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.628406 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb"} err="failed to get container status \"176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb\": rpc error: code = NotFound desc = could not find container \"176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb\": container with ID starting with 176d6428b0949051f44567827e659d41cdc72d938a5cfd3b76389e6daad5f0cb not found: ID does not 
exist" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.628439 4895 scope.go:117] "RemoveContainer" containerID="3fc38c66bf5eb02ddb2acdcdfbb7678957b2c661bc64d1a14ba7ea4ef30929bb" Jan 29 16:49:06 crc kubenswrapper[4895]: E0129 16:49:06.628942 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc38c66bf5eb02ddb2acdcdfbb7678957b2c661bc64d1a14ba7ea4ef30929bb\": container with ID starting with 3fc38c66bf5eb02ddb2acdcdfbb7678957b2c661bc64d1a14ba7ea4ef30929bb not found: ID does not exist" containerID="3fc38c66bf5eb02ddb2acdcdfbb7678957b2c661bc64d1a14ba7ea4ef30929bb" Jan 29 16:49:06 crc kubenswrapper[4895]: I0129 16:49:06.628962 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc38c66bf5eb02ddb2acdcdfbb7678957b2c661bc64d1a14ba7ea4ef30929bb"} err="failed to get container status \"3fc38c66bf5eb02ddb2acdcdfbb7678957b2c661bc64d1a14ba7ea4ef30929bb\": rpc error: code = NotFound desc = could not find container \"3fc38c66bf5eb02ddb2acdcdfbb7678957b2c661bc64d1a14ba7ea4ef30929bb\": container with ID starting with 3fc38c66bf5eb02ddb2acdcdfbb7678957b2c661bc64d1a14ba7ea4ef30929bb not found: ID does not exist" Jan 29 16:49:07 crc kubenswrapper[4895]: I0129 16:49:07.049492 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" path="/var/lib/kubelet/pods/2f39f0a6-0d3b-48c8-8e15-682583a6afe0/volumes" Jan 29 16:49:07 crc kubenswrapper[4895]: I0129 16:49:07.747181 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gl9zs"] Jan 29 16:49:08 crc kubenswrapper[4895]: I0129 16:49:08.511530 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gl9zs" podUID="d35ad095-ddd8-4b52-9453-9e3f2818595c" containerName="registry-server" 
containerID="cri-o://a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9" gracePeriod=2 Jan 29 16:49:08 crc kubenswrapper[4895]: I0129 16:49:08.986753 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.021057 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35ad095-ddd8-4b52-9453-9e3f2818595c-utilities\") pod \"d35ad095-ddd8-4b52-9453-9e3f2818595c\" (UID: \"d35ad095-ddd8-4b52-9453-9e3f2818595c\") " Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.023041 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d35ad095-ddd8-4b52-9453-9e3f2818595c-utilities" (OuterVolumeSpecName: "utilities") pod "d35ad095-ddd8-4b52-9453-9e3f2818595c" (UID: "d35ad095-ddd8-4b52-9453-9e3f2818595c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.122981 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9r2d\" (UniqueName: \"kubernetes.io/projected/d35ad095-ddd8-4b52-9453-9e3f2818595c-kube-api-access-l9r2d\") pod \"d35ad095-ddd8-4b52-9453-9e3f2818595c\" (UID: \"d35ad095-ddd8-4b52-9453-9e3f2818595c\") " Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.123222 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35ad095-ddd8-4b52-9453-9e3f2818595c-catalog-content\") pod \"d35ad095-ddd8-4b52-9453-9e3f2818595c\" (UID: \"d35ad095-ddd8-4b52-9453-9e3f2818595c\") " Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.123848 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35ad095-ddd8-4b52-9453-9e3f2818595c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.132843 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35ad095-ddd8-4b52-9453-9e3f2818595c-kube-api-access-l9r2d" (OuterVolumeSpecName: "kube-api-access-l9r2d") pod "d35ad095-ddd8-4b52-9453-9e3f2818595c" (UID: "d35ad095-ddd8-4b52-9453-9e3f2818595c"). InnerVolumeSpecName "kube-api-access-l9r2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.177204 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d35ad095-ddd8-4b52-9453-9e3f2818595c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d35ad095-ddd8-4b52-9453-9e3f2818595c" (UID: "d35ad095-ddd8-4b52-9453-9e3f2818595c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.226353 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9r2d\" (UniqueName: \"kubernetes.io/projected/d35ad095-ddd8-4b52-9453-9e3f2818595c-kube-api-access-l9r2d\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.226403 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35ad095-ddd8-4b52-9453-9e3f2818595c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.524659 4895 generic.go:334] "Generic (PLEG): container finished" podID="d35ad095-ddd8-4b52-9453-9e3f2818595c" containerID="a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9" exitCode=0 Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.524720 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl9zs" event={"ID":"d35ad095-ddd8-4b52-9453-9e3f2818595c","Type":"ContainerDied","Data":"a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9"} Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.524762 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl9zs" event={"ID":"d35ad095-ddd8-4b52-9453-9e3f2818595c","Type":"ContainerDied","Data":"59eaa8a8ae1dc57fd6dda7968aa75e2ba43f8fc3d5dd3294a8b8c31a466333a4"} Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.524786 4895 scope.go:117] "RemoveContainer" containerID="a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.524801 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gl9zs" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.566333 4895 scope.go:117] "RemoveContainer" containerID="8695b1324210f802102895475bcc6fa597f61ccaf86ea7e3bc74a0adfb91962e" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.573073 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gl9zs"] Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.587047 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gl9zs"] Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.595011 4895 scope.go:117] "RemoveContainer" containerID="de847773ed132b0d465d2f4b0493aace90099e5f9711dda79dc03f0172f08b31" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.631623 4895 scope.go:117] "RemoveContainer" containerID="a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9" Jan 29 16:49:09 crc kubenswrapper[4895]: E0129 16:49:09.632373 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9\": container with ID starting with a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9 not found: ID does not exist" containerID="a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.632426 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9"} err="failed to get container status \"a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9\": rpc error: code = NotFound desc = could not find container \"a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9\": container with ID starting with a73ab3854336fc2d8e47057293f18a89d65693ab235359407e6c4451c43734a9 not 
found: ID does not exist" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.632463 4895 scope.go:117] "RemoveContainer" containerID="8695b1324210f802102895475bcc6fa597f61ccaf86ea7e3bc74a0adfb91962e" Jan 29 16:49:09 crc kubenswrapper[4895]: E0129 16:49:09.632993 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8695b1324210f802102895475bcc6fa597f61ccaf86ea7e3bc74a0adfb91962e\": container with ID starting with 8695b1324210f802102895475bcc6fa597f61ccaf86ea7e3bc74a0adfb91962e not found: ID does not exist" containerID="8695b1324210f802102895475bcc6fa597f61ccaf86ea7e3bc74a0adfb91962e" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.633030 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8695b1324210f802102895475bcc6fa597f61ccaf86ea7e3bc74a0adfb91962e"} err="failed to get container status \"8695b1324210f802102895475bcc6fa597f61ccaf86ea7e3bc74a0adfb91962e\": rpc error: code = NotFound desc = could not find container \"8695b1324210f802102895475bcc6fa597f61ccaf86ea7e3bc74a0adfb91962e\": container with ID starting with 8695b1324210f802102895475bcc6fa597f61ccaf86ea7e3bc74a0adfb91962e not found: ID does not exist" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.633299 4895 scope.go:117] "RemoveContainer" containerID="de847773ed132b0d465d2f4b0493aace90099e5f9711dda79dc03f0172f08b31" Jan 29 16:49:09 crc kubenswrapper[4895]: E0129 16:49:09.633910 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de847773ed132b0d465d2f4b0493aace90099e5f9711dda79dc03f0172f08b31\": container with ID starting with de847773ed132b0d465d2f4b0493aace90099e5f9711dda79dc03f0172f08b31 not found: ID does not exist" containerID="de847773ed132b0d465d2f4b0493aace90099e5f9711dda79dc03f0172f08b31" Jan 29 16:49:09 crc kubenswrapper[4895]: I0129 16:49:09.634094 4895 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de847773ed132b0d465d2f4b0493aace90099e5f9711dda79dc03f0172f08b31"} err="failed to get container status \"de847773ed132b0d465d2f4b0493aace90099e5f9711dda79dc03f0172f08b31\": rpc error: code = NotFound desc = could not find container \"de847773ed132b0d465d2f4b0493aace90099e5f9711dda79dc03f0172f08b31\": container with ID starting with de847773ed132b0d465d2f4b0493aace90099e5f9711dda79dc03f0172f08b31 not found: ID does not exist" Jan 29 16:49:11 crc kubenswrapper[4895]: I0129 16:49:11.060241 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d35ad095-ddd8-4b52-9453-9e3f2818595c" path="/var/lib/kubelet/pods/d35ad095-ddd8-4b52-9453-9e3f2818595c/volumes" Jan 29 16:50:57 crc kubenswrapper[4895]: I0129 16:50:57.822900 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:50:57 crc kubenswrapper[4895]: I0129 16:50:57.823538 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:51:27 crc kubenswrapper[4895]: I0129 16:51:27.823738 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:51:27 crc kubenswrapper[4895]: I0129 16:51:27.824420 4895 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:51:57 crc kubenswrapper[4895]: I0129 16:51:57.824383 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:51:57 crc kubenswrapper[4895]: I0129 16:51:57.824979 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:51:57 crc kubenswrapper[4895]: I0129 16:51:57.825031 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 16:51:57 crc kubenswrapper[4895]: I0129 16:51:57.825967 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:51:57 crc kubenswrapper[4895]: I0129 16:51:57.826026 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" 
containerID="cri-o://484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" gracePeriod=600 Jan 29 16:51:57 crc kubenswrapper[4895]: E0129 16:51:57.955985 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:51:58 crc kubenswrapper[4895]: I0129 16:51:58.184135 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" exitCode=0 Jan 29 16:51:58 crc kubenswrapper[4895]: I0129 16:51:58.184194 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617"} Jan 29 16:51:58 crc kubenswrapper[4895]: I0129 16:51:58.184236 4895 scope.go:117] "RemoveContainer" containerID="d3e768471e09a634e8a9a0e0fd365a02d913a3c9ced9b63c4b19dc39c06cee68" Jan 29 16:51:58 crc kubenswrapper[4895]: I0129 16:51:58.186987 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:51:58 crc kubenswrapper[4895]: E0129 16:51:58.187908 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" 
podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:52:10 crc kubenswrapper[4895]: I0129 16:52:10.037167 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:52:10 crc kubenswrapper[4895]: E0129 16:52:10.039569 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:52:21 crc kubenswrapper[4895]: I0129 16:52:21.037272 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:52:21 crc kubenswrapper[4895]: E0129 16:52:21.038801 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:52:22 crc kubenswrapper[4895]: I0129 16:52:22.120239 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="cf6f1e04-8de8-41c4-816a-b2293ca9886e" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.158:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:52:36 crc kubenswrapper[4895]: I0129 16:52:36.036772 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:52:36 crc kubenswrapper[4895]: E0129 
16:52:36.037672 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.240533 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.256776 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.265956 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-lgkkk"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.286664 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sxv28"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.296301 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pdtsm"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.304654 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.312746 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.321121 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.330198 
4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.340664 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.349499 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.357985 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.368997 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gxcx2"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.377143 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gpx6h"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.384591 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pdtsm"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.392274 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-88h6k"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.400343 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8622n"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.408761 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djthk"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.417558 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9pqwl"] Jan 29 16:52:41 crc kubenswrapper[4895]: I0129 16:52:41.426087 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7jjts"] Jan 29 16:52:43 crc kubenswrapper[4895]: I0129 16:52:43.051429 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ceda164-c8e7-4eb4-8999-082294558365" path="/var/lib/kubelet/pods/1ceda164-c8e7-4eb4-8999-082294558365/volumes" Jan 29 16:52:43 crc kubenswrapper[4895]: I0129 16:52:43.052610 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e71dec6-dfb5-4b6d-822f-0b1da02025ce" path="/var/lib/kubelet/pods/3e71dec6-dfb5-4b6d-822f-0b1da02025ce/volumes" Jan 29 16:52:43 crc kubenswrapper[4895]: I0129 16:52:43.053393 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad" path="/var/lib/kubelet/pods/48d6bcd0-d1e2-4a05-ae60-bf1fdbac2aad/volumes" Jan 29 16:52:43 crc kubenswrapper[4895]: I0129 16:52:43.054405 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b" path="/var/lib/kubelet/pods/5e9a6bba-a6c1-4ee0-89a6-e36735a9e18b/volumes" Jan 29 16:52:43 crc kubenswrapper[4895]: I0129 16:52:43.056729 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76358913-8879-4dd3-8ca1-8dae5ff9a1b2" path="/var/lib/kubelet/pods/76358913-8879-4dd3-8ca1-8dae5ff9a1b2/volumes" Jan 29 16:52:43 crc kubenswrapper[4895]: I0129 16:52:43.057599 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="795c63db-36bb-49ad-9f75-f963d9c19ee9" path="/var/lib/kubelet/pods/795c63db-36bb-49ad-9f75-f963d9c19ee9/volumes" Jan 29 16:52:43 crc kubenswrapper[4895]: I0129 16:52:43.058318 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a3d7ffa-0c28-41bc-8701-e511ea083796" 
path="/var/lib/kubelet/pods/7a3d7ffa-0c28-41bc-8701-e511ea083796/volumes" Jan 29 16:52:43 crc kubenswrapper[4895]: I0129 16:52:43.060354 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="becfe08f-840f-4dbe-9bef-a7dc07254f3c" path="/var/lib/kubelet/pods/becfe08f-840f-4dbe-9bef-a7dc07254f3c/volumes" Jan 29 16:52:43 crc kubenswrapper[4895]: I0129 16:52:43.061302 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c181aaa5-19e0-4d8b-807b-b494677ec871" path="/var/lib/kubelet/pods/c181aaa5-19e0-4d8b-807b-b494677ec871/volumes" Jan 29 16:52:43 crc kubenswrapper[4895]: I0129 16:52:43.062154 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8413a12-38df-4b6d-92d4-5f6750ca05dd" path="/var/lib/kubelet/pods/d8413a12-38df-4b6d-92d4-5f6750ca05dd/volumes" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.643856 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq"] Jan 29 16:52:46 crc kubenswrapper[4895]: E0129 16:52:46.644907 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35ad095-ddd8-4b52-9453-9e3f2818595c" containerName="extract-utilities" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.644925 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35ad095-ddd8-4b52-9453-9e3f2818595c" containerName="extract-utilities" Jan 29 16:52:46 crc kubenswrapper[4895]: E0129 16:52:46.644935 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35ad095-ddd8-4b52-9453-9e3f2818595c" containerName="extract-content" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.644942 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35ad095-ddd8-4b52-9453-9e3f2818595c" containerName="extract-content" Jan 29 16:52:46 crc kubenswrapper[4895]: E0129 16:52:46.644953 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" 
containerName="extract-content" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.644960 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerName="extract-content" Jan 29 16:52:46 crc kubenswrapper[4895]: E0129 16:52:46.645017 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerName="registry-server" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.645023 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerName="registry-server" Jan 29 16:52:46 crc kubenswrapper[4895]: E0129 16:52:46.645039 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerName="extract-utilities" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.645046 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerName="extract-utilities" Jan 29 16:52:46 crc kubenswrapper[4895]: E0129 16:52:46.645061 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35ad095-ddd8-4b52-9453-9e3f2818595c" containerName="registry-server" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.645068 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35ad095-ddd8-4b52-9453-9e3f2818595c" containerName="registry-server" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.645249 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f39f0a6-0d3b-48c8-8e15-682583a6afe0" containerName="registry-server" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.645265 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35ad095-ddd8-4b52-9453-9e3f2818595c" containerName="registry-server" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.646033 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.648968 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.649013 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.649122 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.649133 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.649239 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.655769 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq"] Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.807030 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.807184 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcdvx\" (UniqueName: \"kubernetes.io/projected/5155d24f-53de-4346-bd5f-a5ba690d1a6d-kube-api-access-kcdvx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: 
\"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.807326 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.807361 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.807554 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.909771 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.909907 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcdvx\" (UniqueName: \"kubernetes.io/projected/5155d24f-53de-4346-bd5f-a5ba690d1a6d-kube-api-access-kcdvx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.909985 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.910016 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.910039 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.917819 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-ceph\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.918001 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.918425 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.918927 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:46 crc kubenswrapper[4895]: I0129 16:52:46.928614 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcdvx\" (UniqueName: \"kubernetes.io/projected/5155d24f-53de-4346-bd5f-a5ba690d1a6d-kube-api-access-kcdvx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:47 crc kubenswrapper[4895]: I0129 
16:52:47.159115 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:52:47 crc kubenswrapper[4895]: I0129 16:52:47.704145 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq"] Jan 29 16:52:47 crc kubenswrapper[4895]: I0129 16:52:47.714925 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:52:48 crc kubenswrapper[4895]: I0129 16:52:48.664362 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" event={"ID":"5155d24f-53de-4346-bd5f-a5ba690d1a6d","Type":"ContainerStarted","Data":"85b0fe1b3e0db706c823332165e27e9014a9bb29697469050a90b7addd34f6fc"} Jan 29 16:52:48 crc kubenswrapper[4895]: I0129 16:52:48.664833 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" event={"ID":"5155d24f-53de-4346-bd5f-a5ba690d1a6d","Type":"ContainerStarted","Data":"e3b7f7f3026cba21eb7c239b03945ce1143fd103d088afafba1fa549b02f6eab"} Jan 29 16:52:48 crc kubenswrapper[4895]: I0129 16:52:48.689439 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" podStartSLOduration=2.157947673 podStartE2EDuration="2.68941233s" podCreationTimestamp="2026-01-29 16:52:46 +0000 UTC" firstStartedPulling="2026-01-29 16:52:47.714622575 +0000 UTC m=+2451.517599839" lastFinishedPulling="2026-01-29 16:52:48.246087232 +0000 UTC m=+2452.049064496" observedRunningTime="2026-01-29 16:52:48.6838938 +0000 UTC m=+2452.486871084" watchObservedRunningTime="2026-01-29 16:52:48.68941233 +0000 UTC m=+2452.492389594" Jan 29 16:52:51 crc kubenswrapper[4895]: I0129 16:52:51.037287 4895 scope.go:117] "RemoveContainer" 
containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:52:51 crc kubenswrapper[4895]: E0129 16:52:51.038391 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:53:02 crc kubenswrapper[4895]: I0129 16:53:02.203292 4895 scope.go:117] "RemoveContainer" containerID="5fc19dd6aafea7991a4517f16730cb95e6688ffd2964f1303031416048901a65" Jan 29 16:53:02 crc kubenswrapper[4895]: I0129 16:53:02.247507 4895 scope.go:117] "RemoveContainer" containerID="23b50276999418237f3cac7cfe00240748e884b1fb8b8cc7d17ffaba805b58c9" Jan 29 16:53:02 crc kubenswrapper[4895]: I0129 16:53:02.285457 4895 scope.go:117] "RemoveContainer" containerID="4cd308f04a0f081cde556b0d300ce6ec256329a79c6bb3e0335d7d10e0f41f13" Jan 29 16:53:02 crc kubenswrapper[4895]: I0129 16:53:02.316524 4895 scope.go:117] "RemoveContainer" containerID="628d1b4bcc8e8d296ac0c687451b1503607adc55cb9a6cb906a3c5a45ae96f01" Jan 29 16:53:02 crc kubenswrapper[4895]: I0129 16:53:02.369076 4895 scope.go:117] "RemoveContainer" containerID="d8ffa5e5f9cf3ab6b5ec82abdc4cc4fe49fa1ad96380ac220283be00a66f20d7" Jan 29 16:53:02 crc kubenswrapper[4895]: I0129 16:53:02.401225 4895 scope.go:117] "RemoveContainer" containerID="7831da4cbb000ea4ab55af544abfd485474a959c09bca57f168c7c0ea23536cc" Jan 29 16:53:02 crc kubenswrapper[4895]: I0129 16:53:02.467773 4895 scope.go:117] "RemoveContainer" containerID="4e29b2a01e6c3c212b51ac724a9d8d9277ecc863bc4365eaa8e4d3887cebb820" Jan 29 16:53:03 crc kubenswrapper[4895]: I0129 16:53:03.795577 4895 generic.go:334] "Generic (PLEG): container finished" podID="5155d24f-53de-4346-bd5f-a5ba690d1a6d" 
containerID="85b0fe1b3e0db706c823332165e27e9014a9bb29697469050a90b7addd34f6fc" exitCode=0 Jan 29 16:53:03 crc kubenswrapper[4895]: I0129 16:53:03.795672 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" event={"ID":"5155d24f-53de-4346-bd5f-a5ba690d1a6d","Type":"ContainerDied","Data":"85b0fe1b3e0db706c823332165e27e9014a9bb29697469050a90b7addd34f6fc"} Jan 29 16:53:04 crc kubenswrapper[4895]: I0129 16:53:04.037070 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:53:04 crc kubenswrapper[4895]: E0129 16:53:04.037569 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.536459 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.706165 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-repo-setup-combined-ca-bundle\") pod \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.706257 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-inventory\") pod \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.706389 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-ssh-key-openstack-edpm-ipam\") pod \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.706551 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcdvx\" (UniqueName: \"kubernetes.io/projected/5155d24f-53de-4346-bd5f-a5ba690d1a6d-kube-api-access-kcdvx\") pod \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.706596 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-ceph\") pod \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\" (UID: \"5155d24f-53de-4346-bd5f-a5ba690d1a6d\") " Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.712950 4895 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-ceph" (OuterVolumeSpecName: "ceph") pod "5155d24f-53de-4346-bd5f-a5ba690d1a6d" (UID: "5155d24f-53de-4346-bd5f-a5ba690d1a6d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.713812 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5155d24f-53de-4346-bd5f-a5ba690d1a6d" (UID: "5155d24f-53de-4346-bd5f-a5ba690d1a6d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.714659 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5155d24f-53de-4346-bd5f-a5ba690d1a6d-kube-api-access-kcdvx" (OuterVolumeSpecName: "kube-api-access-kcdvx") pod "5155d24f-53de-4346-bd5f-a5ba690d1a6d" (UID: "5155d24f-53de-4346-bd5f-a5ba690d1a6d"). InnerVolumeSpecName "kube-api-access-kcdvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.734668 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-inventory" (OuterVolumeSpecName: "inventory") pod "5155d24f-53de-4346-bd5f-a5ba690d1a6d" (UID: "5155d24f-53de-4346-bd5f-a5ba690d1a6d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.737141 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5155d24f-53de-4346-bd5f-a5ba690d1a6d" (UID: "5155d24f-53de-4346-bd5f-a5ba690d1a6d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.808272 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.808324 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcdvx\" (UniqueName: \"kubernetes.io/projected/5155d24f-53de-4346-bd5f-a5ba690d1a6d-kube-api-access-kcdvx\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.808335 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.808344 4895 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.808354 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5155d24f-53de-4346-bd5f-a5ba690d1a6d-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.817214 4895 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" event={"ID":"5155d24f-53de-4346-bd5f-a5ba690d1a6d","Type":"ContainerDied","Data":"e3b7f7f3026cba21eb7c239b03945ce1143fd103d088afafba1fa549b02f6eab"} Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.817292 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b7f7f3026cba21eb7c239b03945ce1143fd103d088afafba1fa549b02f6eab" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.817380 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.887851 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt"] Jan 29 16:53:05 crc kubenswrapper[4895]: E0129 16:53:05.888348 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5155d24f-53de-4346-bd5f-a5ba690d1a6d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.888375 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5155d24f-53de-4346-bd5f-a5ba690d1a6d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.888616 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5155d24f-53de-4346-bd5f-a5ba690d1a6d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.889382 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.894247 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.894264 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.894372 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.894515 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.899185 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:53:05 crc kubenswrapper[4895]: I0129 16:53:05.903153 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt"] Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.014142 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.014203 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4xbv\" (UniqueName: \"kubernetes.io/projected/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-kube-api-access-t4xbv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: 
\"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.014300 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.014350 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.014428 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.116659 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.116722 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4xbv\" (UniqueName: \"kubernetes.io/projected/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-kube-api-access-t4xbv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.116757 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.116790 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.116848 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.121550 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: 
\"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.121556 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.121670 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.122125 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.140859 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4xbv\" (UniqueName: \"kubernetes.io/projected/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-kube-api-access-t4xbv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.285039 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:53:06 crc kubenswrapper[4895]: I0129 16:53:06.839593 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt"] Jan 29 16:53:06 crc kubenswrapper[4895]: W0129 16:53:06.841628 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3be5cec1_ef17_4899_a276_f6f7b3cdb9f5.slice/crio-565a1a528993542a9cd58e66f5678063e8fb82a360d420f6446b8fdf435b8474 WatchSource:0}: Error finding container 565a1a528993542a9cd58e66f5678063e8fb82a360d420f6446b8fdf435b8474: Status 404 returned error can't find the container with id 565a1a528993542a9cd58e66f5678063e8fb82a360d420f6446b8fdf435b8474 Jan 29 16:53:07 crc kubenswrapper[4895]: I0129 16:53:07.841319 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" event={"ID":"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5","Type":"ContainerStarted","Data":"565a1a528993542a9cd58e66f5678063e8fb82a360d420f6446b8fdf435b8474"} Jan 29 16:53:09 crc kubenswrapper[4895]: I0129 16:53:09.864137 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" event={"ID":"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5","Type":"ContainerStarted","Data":"55ac2b49490b4ca9846028e71ae1d5ad40e351db3009ab126deca42ab3366f40"} Jan 29 16:53:09 crc kubenswrapper[4895]: I0129 16:53:09.885255 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" podStartSLOduration=3.186099663 podStartE2EDuration="4.885217628s" podCreationTimestamp="2026-01-29 16:53:05 +0000 UTC" firstStartedPulling="2026-01-29 16:53:06.845211942 +0000 UTC m=+2470.648189246" lastFinishedPulling="2026-01-29 16:53:08.544329957 +0000 UTC m=+2472.347307211" 
observedRunningTime="2026-01-29 16:53:09.881120587 +0000 UTC m=+2473.684097871" watchObservedRunningTime="2026-01-29 16:53:09.885217628 +0000 UTC m=+2473.688194892" Jan 29 16:53:17 crc kubenswrapper[4895]: I0129 16:53:17.043554 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:53:17 crc kubenswrapper[4895]: E0129 16:53:17.044480 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:53:32 crc kubenswrapper[4895]: I0129 16:53:32.036906 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:53:32 crc kubenswrapper[4895]: E0129 16:53:32.037685 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:53:46 crc kubenswrapper[4895]: I0129 16:53:46.038145 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:53:46 crc kubenswrapper[4895]: E0129 16:53:46.039384 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:53:57 crc kubenswrapper[4895]: I0129 16:53:57.043771 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:53:57 crc kubenswrapper[4895]: E0129 16:53:57.045800 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:54:02 crc kubenswrapper[4895]: I0129 16:54:02.628833 4895 scope.go:117] "RemoveContainer" containerID="7e05ca6f7763fc14ab8bacb5ae057612a39905bf5f524fcf5278e8a64b11a366" Jan 29 16:54:02 crc kubenswrapper[4895]: I0129 16:54:02.667268 4895 scope.go:117] "RemoveContainer" containerID="67b7f2cb442b0662c90f2a02ab4040b1db981136d5e369e078486be35aca5aef" Jan 29 16:54:02 crc kubenswrapper[4895]: I0129 16:54:02.707462 4895 scope.go:117] "RemoveContainer" containerID="659a015f30f5b7959443a32962882e55991bcd51f4537d24040c3c0e332807d7" Jan 29 16:54:12 crc kubenswrapper[4895]: I0129 16:54:12.036925 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:54:12 crc kubenswrapper[4895]: E0129 16:54:12.037776 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:54:26 crc kubenswrapper[4895]: I0129 16:54:26.037243 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:54:26 crc kubenswrapper[4895]: E0129 16:54:26.038393 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:54:39 crc kubenswrapper[4895]: I0129 16:54:39.037045 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:54:39 crc kubenswrapper[4895]: E0129 16:54:39.038097 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:54:43 crc kubenswrapper[4895]: I0129 16:54:43.725401 4895 generic.go:334] "Generic (PLEG): container finished" podID="3be5cec1-ef17-4899-a276-f6f7b3cdb9f5" containerID="55ac2b49490b4ca9846028e71ae1d5ad40e351db3009ab126deca42ab3366f40" exitCode=0 Jan 29 16:54:43 crc kubenswrapper[4895]: I0129 16:54:43.725525 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" 
event={"ID":"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5","Type":"ContainerDied","Data":"55ac2b49490b4ca9846028e71ae1d5ad40e351db3009ab126deca42ab3366f40"} Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.173085 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.203664 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-ceph\") pod \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.204766 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-bootstrap-combined-ca-bundle\") pod \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.204902 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4xbv\" (UniqueName: \"kubernetes.io/projected/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-kube-api-access-t4xbv\") pod \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.204948 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-inventory\") pod \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.204998 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-ssh-key-openstack-edpm-ipam\") pod \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\" (UID: \"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5\") " Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.212624 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-ceph" (OuterVolumeSpecName: "ceph") pod "3be5cec1-ef17-4899-a276-f6f7b3cdb9f5" (UID: "3be5cec1-ef17-4899-a276-f6f7b3cdb9f5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.216133 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-kube-api-access-t4xbv" (OuterVolumeSpecName: "kube-api-access-t4xbv") pod "3be5cec1-ef17-4899-a276-f6f7b3cdb9f5" (UID: "3be5cec1-ef17-4899-a276-f6f7b3cdb9f5"). InnerVolumeSpecName "kube-api-access-t4xbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.216694 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3be5cec1-ef17-4899-a276-f6f7b3cdb9f5" (UID: "3be5cec1-ef17-4899-a276-f6f7b3cdb9f5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.238125 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-inventory" (OuterVolumeSpecName: "inventory") pod "3be5cec1-ef17-4899-a276-f6f7b3cdb9f5" (UID: "3be5cec1-ef17-4899-a276-f6f7b3cdb9f5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.245856 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3be5cec1-ef17-4899-a276-f6f7b3cdb9f5" (UID: "3be5cec1-ef17-4899-a276-f6f7b3cdb9f5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.308342 4895 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.308411 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4xbv\" (UniqueName: \"kubernetes.io/projected/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-kube-api-access-t4xbv\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.308430 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.308444 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.308557 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3be5cec1-ef17-4899-a276-f6f7b3cdb9f5-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.749806 4895 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" event={"ID":"3be5cec1-ef17-4899-a276-f6f7b3cdb9f5","Type":"ContainerDied","Data":"565a1a528993542a9cd58e66f5678063e8fb82a360d420f6446b8fdf435b8474"} Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.749904 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="565a1a528993542a9cd58e66f5678063e8fb82a360d420f6446b8fdf435b8474" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.749952 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.841550 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt"] Jan 29 16:54:45 crc kubenswrapper[4895]: E0129 16:54:45.842080 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be5cec1-ef17-4899-a276-f6f7b3cdb9f5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.842099 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be5cec1-ef17-4899-a276-f6f7b3cdb9f5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.842340 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be5cec1-ef17-4899-a276-f6f7b3cdb9f5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.843103 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.845219 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.846012 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.846425 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.846574 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.849926 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.853196 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt"] Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.923344 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6htpt\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.923407 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6htpt\" 
(UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.923782 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ckqz\" (UniqueName: \"kubernetes.io/projected/43df5196-f55f-497d-bf95-35b7b2b40a46-kube-api-access-6ckqz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6htpt\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:45 crc kubenswrapper[4895]: I0129 16:54:45.924494 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6htpt\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:46 crc kubenswrapper[4895]: I0129 16:54:46.026160 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ckqz\" (UniqueName: \"kubernetes.io/projected/43df5196-f55f-497d-bf95-35b7b2b40a46-kube-api-access-6ckqz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6htpt\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:46 crc kubenswrapper[4895]: I0129 16:54:46.026388 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6htpt\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:46 crc kubenswrapper[4895]: I0129 
16:54:46.026485 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6htpt\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:46 crc kubenswrapper[4895]: I0129 16:54:46.026526 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6htpt\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:46 crc kubenswrapper[4895]: I0129 16:54:46.030861 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6htpt\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:46 crc kubenswrapper[4895]: I0129 16:54:46.030916 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6htpt\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:46 crc kubenswrapper[4895]: I0129 16:54:46.031466 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-ssh-key-openstack-edpm-ipam\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-6htpt\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:46 crc kubenswrapper[4895]: I0129 16:54:46.046848 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ckqz\" (UniqueName: \"kubernetes.io/projected/43df5196-f55f-497d-bf95-35b7b2b40a46-kube-api-access-6ckqz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6htpt\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:46 crc kubenswrapper[4895]: I0129 16:54:46.160395 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:54:47 crc kubenswrapper[4895]: I0129 16:54:47.123758 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt"] Jan 29 16:54:47 crc kubenswrapper[4895]: I0129 16:54:47.769966 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" event={"ID":"43df5196-f55f-497d-bf95-35b7b2b40a46","Type":"ContainerStarted","Data":"2f039293cd566a5775de708ccc54a8b33166f83bdcb87620a37530a7095787a1"} Jan 29 16:54:48 crc kubenswrapper[4895]: I0129 16:54:48.778805 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" event={"ID":"43df5196-f55f-497d-bf95-35b7b2b40a46","Type":"ContainerStarted","Data":"4f89647b90bf5a4e01a504c3d91fff86f81ee9d047965ba14321087dc751cb02"} Jan 29 16:54:48 crc kubenswrapper[4895]: I0129 16:54:48.806928 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" podStartSLOduration=2.876086858 
podStartE2EDuration="3.806901371s" podCreationTimestamp="2026-01-29 16:54:45 +0000 UTC" firstStartedPulling="2026-01-29 16:54:47.127267435 +0000 UTC m=+2570.930244719" lastFinishedPulling="2026-01-29 16:54:48.058081968 +0000 UTC m=+2571.861059232" observedRunningTime="2026-01-29 16:54:48.799098519 +0000 UTC m=+2572.602075783" watchObservedRunningTime="2026-01-29 16:54:48.806901371 +0000 UTC m=+2572.609878665" Jan 29 16:54:52 crc kubenswrapper[4895]: I0129 16:54:52.037142 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:54:52 crc kubenswrapper[4895]: E0129 16:54:52.037931 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:55:06 crc kubenswrapper[4895]: I0129 16:55:06.037806 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:55:06 crc kubenswrapper[4895]: E0129 16:55:06.039248 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:55:14 crc kubenswrapper[4895]: I0129 16:55:14.009581 4895 generic.go:334] "Generic (PLEG): container finished" podID="43df5196-f55f-497d-bf95-35b7b2b40a46" 
containerID="4f89647b90bf5a4e01a504c3d91fff86f81ee9d047965ba14321087dc751cb02" exitCode=0 Jan 29 16:55:14 crc kubenswrapper[4895]: I0129 16:55:14.009673 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" event={"ID":"43df5196-f55f-497d-bf95-35b7b2b40a46","Type":"ContainerDied","Data":"4f89647b90bf5a4e01a504c3d91fff86f81ee9d047965ba14321087dc751cb02"} Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.466601 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.496511 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-inventory\") pod \"43df5196-f55f-497d-bf95-35b7b2b40a46\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.496684 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ckqz\" (UniqueName: \"kubernetes.io/projected/43df5196-f55f-497d-bf95-35b7b2b40a46-kube-api-access-6ckqz\") pod \"43df5196-f55f-497d-bf95-35b7b2b40a46\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.496725 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-ceph\") pod \"43df5196-f55f-497d-bf95-35b7b2b40a46\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.496839 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-ssh-key-openstack-edpm-ipam\") pod 
\"43df5196-f55f-497d-bf95-35b7b2b40a46\" (UID: \"43df5196-f55f-497d-bf95-35b7b2b40a46\") " Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.504512 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-ceph" (OuterVolumeSpecName: "ceph") pod "43df5196-f55f-497d-bf95-35b7b2b40a46" (UID: "43df5196-f55f-497d-bf95-35b7b2b40a46"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.509467 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43df5196-f55f-497d-bf95-35b7b2b40a46-kube-api-access-6ckqz" (OuterVolumeSpecName: "kube-api-access-6ckqz") pod "43df5196-f55f-497d-bf95-35b7b2b40a46" (UID: "43df5196-f55f-497d-bf95-35b7b2b40a46"). InnerVolumeSpecName "kube-api-access-6ckqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.527261 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "43df5196-f55f-497d-bf95-35b7b2b40a46" (UID: "43df5196-f55f-497d-bf95-35b7b2b40a46"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.530665 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-inventory" (OuterVolumeSpecName: "inventory") pod "43df5196-f55f-497d-bf95-35b7b2b40a46" (UID: "43df5196-f55f-497d-bf95-35b7b2b40a46"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.599378 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.599420 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.599430 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ckqz\" (UniqueName: \"kubernetes.io/projected/43df5196-f55f-497d-bf95-35b7b2b40a46-kube-api-access-6ckqz\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:15 crc kubenswrapper[4895]: I0129 16:55:15.599442 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/43df5196-f55f-497d-bf95-35b7b2b40a46-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.037953 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" event={"ID":"43df5196-f55f-497d-bf95-35b7b2b40a46","Type":"ContainerDied","Data":"2f039293cd566a5775de708ccc54a8b33166f83bdcb87620a37530a7095787a1"} Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.038360 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f039293cd566a5775de708ccc54a8b33166f83bdcb87620a37530a7095787a1" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.038060 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6htpt" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.133129 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn"] Jan 29 16:55:16 crc kubenswrapper[4895]: E0129 16:55:16.135387 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43df5196-f55f-497d-bf95-35b7b2b40a46" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.135452 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="43df5196-f55f-497d-bf95-35b7b2b40a46" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.136301 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="43df5196-f55f-497d-bf95-35b7b2b40a46" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.143917 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.148404 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.150595 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.150923 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.154108 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.154131 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.157077 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn"] Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.210369 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.210476 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.210535 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.210747 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6crgc\" (UniqueName: \"kubernetes.io/projected/3e9466f5-f2f5-43f4-9347-84084177d1df-kube-api-access-6crgc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.313165 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6crgc\" (UniqueName: \"kubernetes.io/projected/3e9466f5-f2f5-43f4-9347-84084177d1df-kube-api-access-6crgc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.313217 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.313264 
4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.313298 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.320080 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.320158 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.320170 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-ceph\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.331184 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6crgc\" (UniqueName: \"kubernetes.io/projected/3e9466f5-f2f5-43f4-9347-84084177d1df-kube-api-access-6crgc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:16 crc kubenswrapper[4895]: I0129 16:55:16.475505 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:17 crc kubenswrapper[4895]: I0129 16:55:17.054641 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn"] Jan 29 16:55:18 crc kubenswrapper[4895]: I0129 16:55:18.037404 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:55:18 crc kubenswrapper[4895]: E0129 16:55:18.038431 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:55:18 crc kubenswrapper[4895]: I0129 16:55:18.062034 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" 
event={"ID":"3e9466f5-f2f5-43f4-9347-84084177d1df","Type":"ContainerStarted","Data":"d1bf0f727df3536aad8a1229457f2bb50c247b743f638ff14b6143ae33180a61"} Jan 29 16:55:18 crc kubenswrapper[4895]: I0129 16:55:18.062156 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" event={"ID":"3e9466f5-f2f5-43f4-9347-84084177d1df","Type":"ContainerStarted","Data":"735eb6e1d4c5614d8ee0fafe7be73253377eb6fee901568af9b1ec18595f4c87"} Jan 29 16:55:18 crc kubenswrapper[4895]: I0129 16:55:18.091073 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" podStartSLOduration=1.6698191580000001 podStartE2EDuration="2.091048679s" podCreationTimestamp="2026-01-29 16:55:16 +0000 UTC" firstStartedPulling="2026-01-29 16:55:17.047607744 +0000 UTC m=+2600.850585008" lastFinishedPulling="2026-01-29 16:55:17.468837265 +0000 UTC m=+2601.271814529" observedRunningTime="2026-01-29 16:55:18.086451554 +0000 UTC m=+2601.889428808" watchObservedRunningTime="2026-01-29 16:55:18.091048679 +0000 UTC m=+2601.894025943" Jan 29 16:55:23 crc kubenswrapper[4895]: I0129 16:55:23.105964 4895 generic.go:334] "Generic (PLEG): container finished" podID="3e9466f5-f2f5-43f4-9347-84084177d1df" containerID="d1bf0f727df3536aad8a1229457f2bb50c247b743f638ff14b6143ae33180a61" exitCode=0 Jan 29 16:55:23 crc kubenswrapper[4895]: I0129 16:55:23.106077 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" event={"ID":"3e9466f5-f2f5-43f4-9347-84084177d1df","Type":"ContainerDied","Data":"d1bf0f727df3536aad8a1229457f2bb50c247b743f638ff14b6143ae33180a61"} Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.534516 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.596346 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-ceph\") pod \"3e9466f5-f2f5-43f4-9347-84084177d1df\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.596477 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-ssh-key-openstack-edpm-ipam\") pod \"3e9466f5-f2f5-43f4-9347-84084177d1df\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.596673 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6crgc\" (UniqueName: \"kubernetes.io/projected/3e9466f5-f2f5-43f4-9347-84084177d1df-kube-api-access-6crgc\") pod \"3e9466f5-f2f5-43f4-9347-84084177d1df\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.596731 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-inventory\") pod \"3e9466f5-f2f5-43f4-9347-84084177d1df\" (UID: \"3e9466f5-f2f5-43f4-9347-84084177d1df\") " Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.603687 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-ceph" (OuterVolumeSpecName: "ceph") pod "3e9466f5-f2f5-43f4-9347-84084177d1df" (UID: "3e9466f5-f2f5-43f4-9347-84084177d1df"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.604571 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9466f5-f2f5-43f4-9347-84084177d1df-kube-api-access-6crgc" (OuterVolumeSpecName: "kube-api-access-6crgc") pod "3e9466f5-f2f5-43f4-9347-84084177d1df" (UID: "3e9466f5-f2f5-43f4-9347-84084177d1df"). InnerVolumeSpecName "kube-api-access-6crgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.624277 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-inventory" (OuterVolumeSpecName: "inventory") pod "3e9466f5-f2f5-43f4-9347-84084177d1df" (UID: "3e9466f5-f2f5-43f4-9347-84084177d1df"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.624600 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3e9466f5-f2f5-43f4-9347-84084177d1df" (UID: "3e9466f5-f2f5-43f4-9347-84084177d1df"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.699732 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.699798 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.699815 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6crgc\" (UniqueName: \"kubernetes.io/projected/3e9466f5-f2f5-43f4-9347-84084177d1df-kube-api-access-6crgc\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:24 crc kubenswrapper[4895]: I0129 16:55:24.699828 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e9466f5-f2f5-43f4-9347-84084177d1df-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.127540 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" event={"ID":"3e9466f5-f2f5-43f4-9347-84084177d1df","Type":"ContainerDied","Data":"735eb6e1d4c5614d8ee0fafe7be73253377eb6fee901568af9b1ec18595f4c87"} Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.127604 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735eb6e1d4c5614d8ee0fafe7be73253377eb6fee901568af9b1ec18595f4c87" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.127664 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.208774 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb"] Jan 29 16:55:25 crc kubenswrapper[4895]: E0129 16:55:25.209241 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9466f5-f2f5-43f4-9347-84084177d1df" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.209265 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9466f5-f2f5-43f4-9347-84084177d1df" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.209519 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9466f5-f2f5-43f4-9347-84084177d1df" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.210378 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.215087 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.215342 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.215582 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.215831 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.216009 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.219217 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb"] Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.310348 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvqdb\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.310502 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvqdb\" (UID: 
\"37a09037-a0fd-4fa0-94de-a819953a38a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.310687 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvqdb\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.310741 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fj6w\" (UniqueName: \"kubernetes.io/projected/37a09037-a0fd-4fa0-94de-a819953a38a1-kube-api-access-9fj6w\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvqdb\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.413730 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvqdb\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.413915 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvqdb\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.414025 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvqdb\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.414059 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fj6w\" (UniqueName: \"kubernetes.io/projected/37a09037-a0fd-4fa0-94de-a819953a38a1-kube-api-access-9fj6w\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvqdb\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.422648 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvqdb\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.423352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvqdb\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.423376 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvqdb\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.438352 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fj6w\" (UniqueName: \"kubernetes.io/projected/37a09037-a0fd-4fa0-94de-a819953a38a1-kube-api-access-9fj6w\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvqdb\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:25 crc kubenswrapper[4895]: I0129 16:55:25.526980 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:55:26 crc kubenswrapper[4895]: I0129 16:55:26.105320 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb"] Jan 29 16:55:26 crc kubenswrapper[4895]: I0129 16:55:26.137524 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" event={"ID":"37a09037-a0fd-4fa0-94de-a819953a38a1","Type":"ContainerStarted","Data":"d3001eac85b5fb097e2f37a8f4c39982d6c23a80cb179e1b56d688a248e15e75"} Jan 29 16:55:28 crc kubenswrapper[4895]: I0129 16:55:28.157708 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" event={"ID":"37a09037-a0fd-4fa0-94de-a819953a38a1","Type":"ContainerStarted","Data":"789aba623a5c7dc55ae463922525a32d14c770ef85fdb11b357e0e37af301e3e"} Jan 29 16:55:28 crc kubenswrapper[4895]: I0129 16:55:28.178674 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" podStartSLOduration=2.431314791 podStartE2EDuration="3.178642974s" podCreationTimestamp="2026-01-29 16:55:25 +0000 UTC" firstStartedPulling="2026-01-29 16:55:26.111443539 +0000 UTC m=+2609.914420803" 
lastFinishedPulling="2026-01-29 16:55:26.858771712 +0000 UTC m=+2610.661748986" observedRunningTime="2026-01-29 16:55:28.173423373 +0000 UTC m=+2611.976400667" watchObservedRunningTime="2026-01-29 16:55:28.178642974 +0000 UTC m=+2611.981620238" Jan 29 16:55:30 crc kubenswrapper[4895]: I0129 16:55:30.037110 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:55:30 crc kubenswrapper[4895]: E0129 16:55:30.037803 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:55:45 crc kubenswrapper[4895]: I0129 16:55:45.038049 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:55:45 crc kubenswrapper[4895]: E0129 16:55:45.039283 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:55:58 crc kubenswrapper[4895]: I0129 16:55:58.037304 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:55:58 crc kubenswrapper[4895]: E0129 16:55:58.038138 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:56:03 crc kubenswrapper[4895]: I0129 16:56:03.479430 4895 generic.go:334] "Generic (PLEG): container finished" podID="37a09037-a0fd-4fa0-94de-a819953a38a1" containerID="789aba623a5c7dc55ae463922525a32d14c770ef85fdb11b357e0e37af301e3e" exitCode=0 Jan 29 16:56:03 crc kubenswrapper[4895]: I0129 16:56:03.479533 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" event={"ID":"37a09037-a0fd-4fa0-94de-a819953a38a1","Type":"ContainerDied","Data":"789aba623a5c7dc55ae463922525a32d14c770ef85fdb11b357e0e37af301e3e"} Jan 29 16:56:04 crc kubenswrapper[4895]: I0129 16:56:04.921767 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.045016 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-ceph\") pod \"37a09037-a0fd-4fa0-94de-a819953a38a1\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.045223 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fj6w\" (UniqueName: \"kubernetes.io/projected/37a09037-a0fd-4fa0-94de-a819953a38a1-kube-api-access-9fj6w\") pod \"37a09037-a0fd-4fa0-94de-a819953a38a1\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.045377 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-inventory\") pod \"37a09037-a0fd-4fa0-94de-a819953a38a1\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.045425 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-ssh-key-openstack-edpm-ipam\") pod \"37a09037-a0fd-4fa0-94de-a819953a38a1\" (UID: \"37a09037-a0fd-4fa0-94de-a819953a38a1\") " Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.052422 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-ceph" (OuterVolumeSpecName: "ceph") pod "37a09037-a0fd-4fa0-94de-a819953a38a1" (UID: "37a09037-a0fd-4fa0-94de-a819953a38a1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.052481 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a09037-a0fd-4fa0-94de-a819953a38a1-kube-api-access-9fj6w" (OuterVolumeSpecName: "kube-api-access-9fj6w") pod "37a09037-a0fd-4fa0-94de-a819953a38a1" (UID: "37a09037-a0fd-4fa0-94de-a819953a38a1"). InnerVolumeSpecName "kube-api-access-9fj6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.073272 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-inventory" (OuterVolumeSpecName: "inventory") pod "37a09037-a0fd-4fa0-94de-a819953a38a1" (UID: "37a09037-a0fd-4fa0-94de-a819953a38a1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.076659 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "37a09037-a0fd-4fa0-94de-a819953a38a1" (UID: "37a09037-a0fd-4fa0-94de-a819953a38a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.149617 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.149658 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.149670 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37a09037-a0fd-4fa0-94de-a819953a38a1-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.149680 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fj6w\" (UniqueName: \"kubernetes.io/projected/37a09037-a0fd-4fa0-94de-a819953a38a1-kube-api-access-9fj6w\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.516187 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" event={"ID":"37a09037-a0fd-4fa0-94de-a819953a38a1","Type":"ContainerDied","Data":"d3001eac85b5fb097e2f37a8f4c39982d6c23a80cb179e1b56d688a248e15e75"} Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.516384 
4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3001eac85b5fb097e2f37a8f4c39982d6c23a80cb179e1b56d688a248e15e75" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.516508 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvqdb" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.598366 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d"] Jan 29 16:56:05 crc kubenswrapper[4895]: E0129 16:56:05.599220 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a09037-a0fd-4fa0-94de-a819953a38a1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.599248 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a09037-a0fd-4fa0-94de-a819953a38a1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.599521 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="37a09037-a0fd-4fa0-94de-a819953a38a1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.600465 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.608413 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.608670 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.608941 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.609114 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.611162 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.625744 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d"] Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.767910 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.768341 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d\" (UID: 
\"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.768369 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqpjm\" (UniqueName: \"kubernetes.io/projected/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-kube-api-access-kqpjm\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.768391 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.871544 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.871678 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.871705 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqpjm\" (UniqueName: \"kubernetes.io/projected/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-kube-api-access-kqpjm\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.871750 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.879628 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.883475 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.887614 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d\" (UID: 
\"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.896368 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqpjm\" (UniqueName: \"kubernetes.io/projected/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-kube-api-access-kqpjm\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:05 crc kubenswrapper[4895]: I0129 16:56:05.937940 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:06 crc kubenswrapper[4895]: I0129 16:56:06.508391 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d"] Jan 29 16:56:06 crc kubenswrapper[4895]: I0129 16:56:06.527083 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" event={"ID":"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394","Type":"ContainerStarted","Data":"cd3800caf8f0406c0ebaf56ff582fd08d582126fc08c5bf9f2dbd079aa10d84f"} Jan 29 16:56:07 crc kubenswrapper[4895]: I0129 16:56:07.539897 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" event={"ID":"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394","Type":"ContainerStarted","Data":"6865c802695407b44783c8b0e6bfa51a2a840a5b1df5fc746dac721a77a62849"} Jan 29 16:56:07 crc kubenswrapper[4895]: I0129 16:56:07.564400 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" podStartSLOduration=2.086954995 podStartE2EDuration="2.564382247s" podCreationTimestamp="2026-01-29 16:56:05 +0000 UTC" 
firstStartedPulling="2026-01-29 16:56:06.51972747 +0000 UTC m=+2650.322704734" lastFinishedPulling="2026-01-29 16:56:06.997154722 +0000 UTC m=+2650.800131986" observedRunningTime="2026-01-29 16:56:07.561078148 +0000 UTC m=+2651.364055422" watchObservedRunningTime="2026-01-29 16:56:07.564382247 +0000 UTC m=+2651.367359501" Jan 29 16:56:11 crc kubenswrapper[4895]: I0129 16:56:11.037012 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:56:11 crc kubenswrapper[4895]: E0129 16:56:11.038591 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:56:11 crc kubenswrapper[4895]: I0129 16:56:11.574809 4895 generic.go:334] "Generic (PLEG): container finished" podID="dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394" containerID="6865c802695407b44783c8b0e6bfa51a2a840a5b1df5fc746dac721a77a62849" exitCode=0 Jan 29 16:56:11 crc kubenswrapper[4895]: I0129 16:56:11.575050 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" event={"ID":"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394","Type":"ContainerDied","Data":"6865c802695407b44783c8b0e6bfa51a2a840a5b1df5fc746dac721a77a62849"} Jan 29 16:56:12 crc kubenswrapper[4895]: I0129 16:56:12.993807 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.119977 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-ssh-key-openstack-edpm-ipam\") pod \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.120073 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqpjm\" (UniqueName: \"kubernetes.io/projected/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-kube-api-access-kqpjm\") pod \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.120224 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-inventory\") pod \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.120252 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-ceph\") pod \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\" (UID: \"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394\") " Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.126819 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-kube-api-access-kqpjm" (OuterVolumeSpecName: "kube-api-access-kqpjm") pod "dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394" (UID: "dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394"). InnerVolumeSpecName "kube-api-access-kqpjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.127639 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-ceph" (OuterVolumeSpecName: "ceph") pod "dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394" (UID: "dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.146732 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394" (UID: "dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.168756 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-inventory" (OuterVolumeSpecName: "inventory") pod "dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394" (UID: "dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.222909 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.222959 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqpjm\" (UniqueName: \"kubernetes.io/projected/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-kube-api-access-kqpjm\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.222974 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.222987 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.593688 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" event={"ID":"dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394","Type":"ContainerDied","Data":"cd3800caf8f0406c0ebaf56ff582fd08d582126fc08c5bf9f2dbd079aa10d84f"} Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.593731 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd3800caf8f0406c0ebaf56ff582fd08d582126fc08c5bf9f2dbd079aa10d84f" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.593781 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.743626 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t"] Jan 29 16:56:13 crc kubenswrapper[4895]: E0129 16:56:13.744026 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.744044 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.744265 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.744911 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.747566 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.747673 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.747756 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.748214 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.748363 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.776025 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t"] Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.836534 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.836666 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjgqk\" (UniqueName: \"kubernetes.io/projected/8b5bbe74-3ed2-4061-bc48-cd76433873da-kube-api-access-mjgqk\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.837136 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.837326 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.944232 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.944350 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.944386 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.944647 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjgqk\" (UniqueName: \"kubernetes.io/projected/8b5bbe74-3ed2-4061-bc48-cd76433873da-kube-api-access-mjgqk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.964579 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.964607 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.964799 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjgqk\" (UniqueName: \"kubernetes.io/projected/8b5bbe74-3ed2-4061-bc48-cd76433873da-kube-api-access-mjgqk\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:13 crc kubenswrapper[4895]: I0129 16:56:13.968380 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:14 crc kubenswrapper[4895]: I0129 16:56:14.062442 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:56:14 crc kubenswrapper[4895]: I0129 16:56:14.589436 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t"] Jan 29 16:56:14 crc kubenswrapper[4895]: I0129 16:56:14.603847 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" event={"ID":"8b5bbe74-3ed2-4061-bc48-cd76433873da","Type":"ContainerStarted","Data":"ac7a6a437da5ad1c7550ce3032ba806086571d6a5e89415ae9ba65f5663692a6"} Jan 29 16:56:15 crc kubenswrapper[4895]: I0129 16:56:15.613186 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" event={"ID":"8b5bbe74-3ed2-4061-bc48-cd76433873da","Type":"ContainerStarted","Data":"175d536505278741d74b36fe33fe3aad379319724dbdf2e6644e51480cad99d8"} Jan 29 16:56:15 crc kubenswrapper[4895]: I0129 16:56:15.637312 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" podStartSLOduration=2.100633182 podStartE2EDuration="2.637280186s" podCreationTimestamp="2026-01-29 16:56:13 +0000 UTC" 
firstStartedPulling="2026-01-29 16:56:14.595425879 +0000 UTC m=+2658.398403133" lastFinishedPulling="2026-01-29 16:56:15.132072873 +0000 UTC m=+2658.935050137" observedRunningTime="2026-01-29 16:56:15.629546916 +0000 UTC m=+2659.432524220" watchObservedRunningTime="2026-01-29 16:56:15.637280186 +0000 UTC m=+2659.440257560" Jan 29 16:56:25 crc kubenswrapper[4895]: I0129 16:56:25.038028 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:56:25 crc kubenswrapper[4895]: E0129 16:56:25.038945 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:56:37 crc kubenswrapper[4895]: I0129 16:56:37.042075 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:56:37 crc kubenswrapper[4895]: E0129 16:56:37.043064 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:56:48 crc kubenswrapper[4895]: I0129 16:56:48.038078 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:56:48 crc kubenswrapper[4895]: E0129 16:56:48.039362 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 16:57:02 crc kubenswrapper[4895]: I0129 16:57:02.037124 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 16:57:03 crc kubenswrapper[4895]: I0129 16:57:03.113207 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"8076c5b4874e3a644860820766402316462025e7cb5b586483b35291947d5378"} Jan 29 16:57:04 crc kubenswrapper[4895]: I0129 16:57:04.131924 4895 generic.go:334] "Generic (PLEG): container finished" podID="8b5bbe74-3ed2-4061-bc48-cd76433873da" containerID="175d536505278741d74b36fe33fe3aad379319724dbdf2e6644e51480cad99d8" exitCode=0 Jan 29 16:57:04 crc kubenswrapper[4895]: I0129 16:57:04.132099 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" event={"ID":"8b5bbe74-3ed2-4061-bc48-cd76433873da","Type":"ContainerDied","Data":"175d536505278741d74b36fe33fe3aad379319724dbdf2e6644e51480cad99d8"} Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.655064 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.759600 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-ceph\") pod \"8b5bbe74-3ed2-4061-bc48-cd76433873da\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.759678 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjgqk\" (UniqueName: \"kubernetes.io/projected/8b5bbe74-3ed2-4061-bc48-cd76433873da-kube-api-access-mjgqk\") pod \"8b5bbe74-3ed2-4061-bc48-cd76433873da\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.759770 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-inventory\") pod \"8b5bbe74-3ed2-4061-bc48-cd76433873da\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.759815 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-ssh-key-openstack-edpm-ipam\") pod \"8b5bbe74-3ed2-4061-bc48-cd76433873da\" (UID: \"8b5bbe74-3ed2-4061-bc48-cd76433873da\") " Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.766983 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-ceph" (OuterVolumeSpecName: "ceph") pod "8b5bbe74-3ed2-4061-bc48-cd76433873da" (UID: "8b5bbe74-3ed2-4061-bc48-cd76433873da"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.768094 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5bbe74-3ed2-4061-bc48-cd76433873da-kube-api-access-mjgqk" (OuterVolumeSpecName: "kube-api-access-mjgqk") pod "8b5bbe74-3ed2-4061-bc48-cd76433873da" (UID: "8b5bbe74-3ed2-4061-bc48-cd76433873da"). InnerVolumeSpecName "kube-api-access-mjgqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.792735 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b5bbe74-3ed2-4061-bc48-cd76433873da" (UID: "8b5bbe74-3ed2-4061-bc48-cd76433873da"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.793260 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-inventory" (OuterVolumeSpecName: "inventory") pod "8b5bbe74-3ed2-4061-bc48-cd76433873da" (UID: "8b5bbe74-3ed2-4061-bc48-cd76433873da"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.862198 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.862260 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjgqk\" (UniqueName: \"kubernetes.io/projected/8b5bbe74-3ed2-4061-bc48-cd76433873da-kube-api-access-mjgqk\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.862280 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:05 crc kubenswrapper[4895]: I0129 16:57:05.862292 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b5bbe74-3ed2-4061-bc48-cd76433873da-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.157931 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" event={"ID":"8b5bbe74-3ed2-4061-bc48-cd76433873da","Type":"ContainerDied","Data":"ac7a6a437da5ad1c7550ce3032ba806086571d6a5e89415ae9ba65f5663692a6"} Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.157995 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.157987 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac7a6a437da5ad1c7550ce3032ba806086571d6a5e89415ae9ba65f5663692a6" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.254379 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9gqfn"] Jan 29 16:57:06 crc kubenswrapper[4895]: E0129 16:57:06.257626 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5bbe74-3ed2-4061-bc48-cd76433873da" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.257806 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5bbe74-3ed2-4061-bc48-cd76433873da" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.258239 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5bbe74-3ed2-4061-bc48-cd76433873da" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.259262 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.262194 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.262981 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.263016 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.263448 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.263692 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.275242 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9gqfn"] Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.275846 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-ceph\") pod \"ssh-known-hosts-edpm-deployment-9gqfn\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.276000 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mh4r\" (UniqueName: \"kubernetes.io/projected/36973575-f9d7-4d47-b222-5072acf5317d-kube-api-access-7mh4r\") pod \"ssh-known-hosts-edpm-deployment-9gqfn\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc 
kubenswrapper[4895]: I0129 16:57:06.276043 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9gqfn\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.276071 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9gqfn\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.377643 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mh4r\" (UniqueName: \"kubernetes.io/projected/36973575-f9d7-4d47-b222-5072acf5317d-kube-api-access-7mh4r\") pod \"ssh-known-hosts-edpm-deployment-9gqfn\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.377718 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9gqfn\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.377754 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9gqfn\" (UID: 
\"36973575-f9d7-4d47-b222-5072acf5317d\") " pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.378888 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-ceph\") pod \"ssh-known-hosts-edpm-deployment-9gqfn\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.382283 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9gqfn\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.382712 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9gqfn\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.387350 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-ceph\") pod \"ssh-known-hosts-edpm-deployment-9gqfn\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.397099 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mh4r\" (UniqueName: \"kubernetes.io/projected/36973575-f9d7-4d47-b222-5072acf5317d-kube-api-access-7mh4r\") pod \"ssh-known-hosts-edpm-deployment-9gqfn\" (UID: 
\"36973575-f9d7-4d47-b222-5072acf5317d\") " pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:06 crc kubenswrapper[4895]: I0129 16:57:06.583437 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:07 crc kubenswrapper[4895]: I0129 16:57:07.196472 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9gqfn"] Jan 29 16:57:07 crc kubenswrapper[4895]: W0129 16:57:07.203680 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36973575_f9d7_4d47_b222_5072acf5317d.slice/crio-2a6b8faeccb4be38edb00f8c76bae94948510f91c9e62d5085e322afe400727a WatchSource:0}: Error finding container 2a6b8faeccb4be38edb00f8c76bae94948510f91c9e62d5085e322afe400727a: Status 404 returned error can't find the container with id 2a6b8faeccb4be38edb00f8c76bae94948510f91c9e62d5085e322afe400727a Jan 29 16:57:08 crc kubenswrapper[4895]: I0129 16:57:08.180348 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" event={"ID":"36973575-f9d7-4d47-b222-5072acf5317d","Type":"ContainerStarted","Data":"b66c8e545d56624d839dc1256f0077fc037484e0469d0e332139d4d379c0c0c9"} Jan 29 16:57:08 crc kubenswrapper[4895]: I0129 16:57:08.180894 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" event={"ID":"36973575-f9d7-4d47-b222-5072acf5317d","Type":"ContainerStarted","Data":"2a6b8faeccb4be38edb00f8c76bae94948510f91c9e62d5085e322afe400727a"} Jan 29 16:57:08 crc kubenswrapper[4895]: I0129 16:57:08.201975 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" podStartSLOduration=1.785274905 podStartE2EDuration="2.201947528s" podCreationTimestamp="2026-01-29 16:57:06 +0000 UTC" firstStartedPulling="2026-01-29 
16:57:07.209315814 +0000 UTC m=+2711.012293078" lastFinishedPulling="2026-01-29 16:57:07.625988427 +0000 UTC m=+2711.428965701" observedRunningTime="2026-01-29 16:57:08.198499354 +0000 UTC m=+2712.001476638" watchObservedRunningTime="2026-01-29 16:57:08.201947528 +0000 UTC m=+2712.004924792" Jan 29 16:57:18 crc kubenswrapper[4895]: I0129 16:57:18.281997 4895 generic.go:334] "Generic (PLEG): container finished" podID="36973575-f9d7-4d47-b222-5072acf5317d" containerID="b66c8e545d56624d839dc1256f0077fc037484e0469d0e332139d4d379c0c0c9" exitCode=0 Jan 29 16:57:18 crc kubenswrapper[4895]: I0129 16:57:18.282086 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" event={"ID":"36973575-f9d7-4d47-b222-5072acf5317d","Type":"ContainerDied","Data":"b66c8e545d56624d839dc1256f0077fc037484e0469d0e332139d4d379c0c0c9"} Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.791276 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.865310 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-inventory-0\") pod \"36973575-f9d7-4d47-b222-5072acf5317d\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.865372 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-ssh-key-openstack-edpm-ipam\") pod \"36973575-f9d7-4d47-b222-5072acf5317d\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.865551 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mh4r\" (UniqueName: 
\"kubernetes.io/projected/36973575-f9d7-4d47-b222-5072acf5317d-kube-api-access-7mh4r\") pod \"36973575-f9d7-4d47-b222-5072acf5317d\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.865586 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-ceph\") pod \"36973575-f9d7-4d47-b222-5072acf5317d\" (UID: \"36973575-f9d7-4d47-b222-5072acf5317d\") " Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.872771 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36973575-f9d7-4d47-b222-5072acf5317d-kube-api-access-7mh4r" (OuterVolumeSpecName: "kube-api-access-7mh4r") pod "36973575-f9d7-4d47-b222-5072acf5317d" (UID: "36973575-f9d7-4d47-b222-5072acf5317d"). InnerVolumeSpecName "kube-api-access-7mh4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.872917 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-ceph" (OuterVolumeSpecName: "ceph") pod "36973575-f9d7-4d47-b222-5072acf5317d" (UID: "36973575-f9d7-4d47-b222-5072acf5317d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.901237 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "36973575-f9d7-4d47-b222-5072acf5317d" (UID: "36973575-f9d7-4d47-b222-5072acf5317d"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.903056 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "36973575-f9d7-4d47-b222-5072acf5317d" (UID: "36973575-f9d7-4d47-b222-5072acf5317d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.968074 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mh4r\" (UniqueName: \"kubernetes.io/projected/36973575-f9d7-4d47-b222-5072acf5317d-kube-api-access-7mh4r\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.968116 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.968128 4895 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:19 crc kubenswrapper[4895]: I0129 16:57:19.968142 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36973575-f9d7-4d47-b222-5072acf5317d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.303486 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" event={"ID":"36973575-f9d7-4d47-b222-5072acf5317d","Type":"ContainerDied","Data":"2a6b8faeccb4be38edb00f8c76bae94948510f91c9e62d5085e322afe400727a"} Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.304085 4895 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a6b8faeccb4be38edb00f8c76bae94948510f91c9e62d5085e322afe400727a" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.303563 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9gqfn" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.409169 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7"] Jan 29 16:57:20 crc kubenswrapper[4895]: E0129 16:57:20.409715 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36973575-f9d7-4d47-b222-5072acf5317d" containerName="ssh-known-hosts-edpm-deployment" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.409742 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="36973575-f9d7-4d47-b222-5072acf5317d" containerName="ssh-known-hosts-edpm-deployment" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.409996 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="36973575-f9d7-4d47-b222-5072acf5317d" containerName="ssh-known-hosts-edpm-deployment" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.410927 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.413277 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.413484 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.414525 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.414666 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.414957 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.423062 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7"] Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.482238 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5grqb\" (UniqueName: \"kubernetes.io/projected/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-kube-api-access-5grqb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm4c7\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.482455 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm4c7\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.482580 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm4c7\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.482636 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm4c7\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.584366 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5grqb\" (UniqueName: \"kubernetes.io/projected/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-kube-api-access-5grqb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm4c7\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.584514 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm4c7\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.584587 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm4c7\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.584668 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm4c7\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.590737 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm4c7\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.590802 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm4c7\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.591443 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm4c7\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 
16:57:20.601720 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5grqb\" (UniqueName: \"kubernetes.io/projected/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-kube-api-access-5grqb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm4c7\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:20 crc kubenswrapper[4895]: I0129 16:57:20.746833 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:21 crc kubenswrapper[4895]: I0129 16:57:21.328912 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7"] Jan 29 16:57:22 crc kubenswrapper[4895]: I0129 16:57:22.326598 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" event={"ID":"75cbd8da-e8b9-4d15-b092-e0bb97e177d0","Type":"ContainerStarted","Data":"b0a30c540fb38571422675c4ce6f1d43c4be211c88cc8c52af16c9404ce4e28f"} Jan 29 16:57:22 crc kubenswrapper[4895]: I0129 16:57:22.327459 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" event={"ID":"75cbd8da-e8b9-4d15-b092-e0bb97e177d0","Type":"ContainerStarted","Data":"2f3230a8a9a795620f1dfbcbce1971cba46d014047c88813f0f90c02014aedad"} Jan 29 16:57:22 crc kubenswrapper[4895]: I0129 16:57:22.351462 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" podStartSLOduration=1.883315888 podStartE2EDuration="2.351437046s" podCreationTimestamp="2026-01-29 16:57:20 +0000 UTC" firstStartedPulling="2026-01-29 16:57:21.334593687 +0000 UTC m=+2725.137570951" lastFinishedPulling="2026-01-29 16:57:21.802714845 +0000 UTC m=+2725.605692109" observedRunningTime="2026-01-29 16:57:22.345839174 +0000 UTC 
m=+2726.148816438" watchObservedRunningTime="2026-01-29 16:57:22.351437046 +0000 UTC m=+2726.154414330" Jan 29 16:57:29 crc kubenswrapper[4895]: I0129 16:57:29.398592 4895 generic.go:334] "Generic (PLEG): container finished" podID="75cbd8da-e8b9-4d15-b092-e0bb97e177d0" containerID="b0a30c540fb38571422675c4ce6f1d43c4be211c88cc8c52af16c9404ce4e28f" exitCode=0 Jan 29 16:57:29 crc kubenswrapper[4895]: I0129 16:57:29.398664 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" event={"ID":"75cbd8da-e8b9-4d15-b092-e0bb97e177d0","Type":"ContainerDied","Data":"b0a30c540fb38571422675c4ce6f1d43c4be211c88cc8c52af16c9404ce4e28f"} Jan 29 16:57:30 crc kubenswrapper[4895]: I0129 16:57:30.972703 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.102643 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5grqb\" (UniqueName: \"kubernetes.io/projected/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-kube-api-access-5grqb\") pod \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.102968 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-ssh-key-openstack-edpm-ipam\") pod \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.103064 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-ceph\") pod \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " Jan 29 16:57:31 crc 
kubenswrapper[4895]: I0129 16:57:31.103249 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-inventory\") pod \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\" (UID: \"75cbd8da-e8b9-4d15-b092-e0bb97e177d0\") " Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.112329 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-ceph" (OuterVolumeSpecName: "ceph") pod "75cbd8da-e8b9-4d15-b092-e0bb97e177d0" (UID: "75cbd8da-e8b9-4d15-b092-e0bb97e177d0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.112584 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-kube-api-access-5grqb" (OuterVolumeSpecName: "kube-api-access-5grqb") pod "75cbd8da-e8b9-4d15-b092-e0bb97e177d0" (UID: "75cbd8da-e8b9-4d15-b092-e0bb97e177d0"). InnerVolumeSpecName "kube-api-access-5grqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.135598 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "75cbd8da-e8b9-4d15-b092-e0bb97e177d0" (UID: "75cbd8da-e8b9-4d15-b092-e0bb97e177d0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.154610 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-inventory" (OuterVolumeSpecName: "inventory") pod "75cbd8da-e8b9-4d15-b092-e0bb97e177d0" (UID: "75cbd8da-e8b9-4d15-b092-e0bb97e177d0"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.206686 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.207552 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5grqb\" (UniqueName: \"kubernetes.io/projected/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-kube-api-access-5grqb\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.207596 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.207607 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75cbd8da-e8b9-4d15-b092-e0bb97e177d0-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.419298 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" event={"ID":"75cbd8da-e8b9-4d15-b092-e0bb97e177d0","Type":"ContainerDied","Data":"2f3230a8a9a795620f1dfbcbce1971cba46d014047c88813f0f90c02014aedad"} Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.419346 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f3230a8a9a795620f1dfbcbce1971cba46d014047c88813f0f90c02014aedad" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.419401 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm4c7" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.503645 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2"] Jan 29 16:57:31 crc kubenswrapper[4895]: E0129 16:57:31.504228 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75cbd8da-e8b9-4d15-b092-e0bb97e177d0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.504253 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="75cbd8da-e8b9-4d15-b092-e0bb97e177d0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.504525 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="75cbd8da-e8b9-4d15-b092-e0bb97e177d0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.505362 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.507564 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.507752 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.509354 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.509452 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.509658 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.514704 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48m7q\" (UniqueName: \"kubernetes.io/projected/a521de95-49f8-451c-9d1f-0e938e4c3aa5-kube-api-access-48m7q\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.514891 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.515010 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.515122 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.525433 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2"] Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.620003 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48m7q\" (UniqueName: \"kubernetes.io/projected/a521de95-49f8-451c-9d1f-0e938e4c3aa5-kube-api-access-48m7q\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.620173 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.620230 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.620273 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.627284 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.627853 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.629808 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.665607 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48m7q\" (UniqueName: \"kubernetes.io/projected/a521de95-49f8-451c-9d1f-0e938e4c3aa5-kube-api-access-48m7q\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:31 crc kubenswrapper[4895]: I0129 16:57:31.841416 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:32 crc kubenswrapper[4895]: I0129 16:57:32.447194 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2"] Jan 29 16:57:33 crc kubenswrapper[4895]: I0129 16:57:33.443609 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" event={"ID":"a521de95-49f8-451c-9d1f-0e938e4c3aa5","Type":"ContainerStarted","Data":"abc512212a78abffebbe48857ac0c0ba650da1d42d4d6c0b464c19d409fa9b39"} Jan 29 16:57:33 crc kubenswrapper[4895]: I0129 16:57:33.444246 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" event={"ID":"a521de95-49f8-451c-9d1f-0e938e4c3aa5","Type":"ContainerStarted","Data":"03578c4ef203787424ba60bbe06436c1641e0b049483ecafc5df1dac73eaea92"} Jan 29 16:57:43 crc kubenswrapper[4895]: I0129 16:57:43.533032 4895 generic.go:334] "Generic (PLEG): container finished" podID="a521de95-49f8-451c-9d1f-0e938e4c3aa5" containerID="abc512212a78abffebbe48857ac0c0ba650da1d42d4d6c0b464c19d409fa9b39" exitCode=0 Jan 29 16:57:43 crc kubenswrapper[4895]: I0129 16:57:43.533113 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" event={"ID":"a521de95-49f8-451c-9d1f-0e938e4c3aa5","Type":"ContainerDied","Data":"abc512212a78abffebbe48857ac0c0ba650da1d42d4d6c0b464c19d409fa9b39"} Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.024521 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.100056 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-inventory\") pod \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.100268 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48m7q\" (UniqueName: \"kubernetes.io/projected/a521de95-49f8-451c-9d1f-0e938e4c3aa5-kube-api-access-48m7q\") pod \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.100371 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-ssh-key-openstack-edpm-ipam\") pod \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.100400 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-ceph\") pod \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\" (UID: \"a521de95-49f8-451c-9d1f-0e938e4c3aa5\") " Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.106083 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a521de95-49f8-451c-9d1f-0e938e4c3aa5-kube-api-access-48m7q" (OuterVolumeSpecName: "kube-api-access-48m7q") pod "a521de95-49f8-451c-9d1f-0e938e4c3aa5" (UID: "a521de95-49f8-451c-9d1f-0e938e4c3aa5"). InnerVolumeSpecName "kube-api-access-48m7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.106665 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-ceph" (OuterVolumeSpecName: "ceph") pod "a521de95-49f8-451c-9d1f-0e938e4c3aa5" (UID: "a521de95-49f8-451c-9d1f-0e938e4c3aa5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.130723 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a521de95-49f8-451c-9d1f-0e938e4c3aa5" (UID: "a521de95-49f8-451c-9d1f-0e938e4c3aa5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.134099 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-inventory" (OuterVolumeSpecName: "inventory") pod "a521de95-49f8-451c-9d1f-0e938e4c3aa5" (UID: "a521de95-49f8-451c-9d1f-0e938e4c3aa5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.202537 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.202607 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48m7q\" (UniqueName: \"kubernetes.io/projected/a521de95-49f8-451c-9d1f-0e938e4c3aa5-kube-api-access-48m7q\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.202618 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.202626 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a521de95-49f8-451c-9d1f-0e938e4c3aa5-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.550799 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" event={"ID":"a521de95-49f8-451c-9d1f-0e938e4c3aa5","Type":"ContainerDied","Data":"03578c4ef203787424ba60bbe06436c1641e0b049483ecafc5df1dac73eaea92"} Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.550862 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.550861 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03578c4ef203787424ba60bbe06436c1641e0b049483ecafc5df1dac73eaea92" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.640919 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4"] Jan 29 16:57:45 crc kubenswrapper[4895]: E0129 16:57:45.641408 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a521de95-49f8-451c-9d1f-0e938e4c3aa5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.641444 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a521de95-49f8-451c-9d1f-0e938e4c3aa5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.641741 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a521de95-49f8-451c-9d1f-0e938e4c3aa5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.644395 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.654503 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.666541 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.666765 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.666818 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.666926 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.667056 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.666942 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.667081 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.676758 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4"] Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.818799 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.819237 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.819329 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.819577 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.819760 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.819794 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.819918 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzd6m\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-kube-api-access-mzd6m\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.820195 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.820410 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.820490 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.820558 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.820686 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.820715 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.923122 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.923201 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.923223 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.923258 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzd6m\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-kube-api-access-mzd6m\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.923301 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.923334 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.923352 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.923375 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 
16:57:45.923409 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.923427 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.923458 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.923481 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.923498 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.928340 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.928756 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.929686 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.929827 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.931288 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.932655 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.933299 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.933779 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: 
\"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.933791 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.935408 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.936582 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.941310 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.944928 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzd6m\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-kube-api-access-mzd6m\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-52ns4\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:45 crc kubenswrapper[4895]: I0129 16:57:45.963547 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:57:46 crc kubenswrapper[4895]: I0129 16:57:46.496307 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4"] Jan 29 16:57:46 crc kubenswrapper[4895]: W0129 16:57:46.504469 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb749d017_f562_4021_9a9d_474569a1400e.slice/crio-f293e2ef1e1af8d37ecd6738d46a84a4470f3587ea34efd0351d6aca0f33ff56 WatchSource:0}: Error finding container f293e2ef1e1af8d37ecd6738d46a84a4470f3587ea34efd0351d6aca0f33ff56: Status 404 returned error can't find the container with id f293e2ef1e1af8d37ecd6738d46a84a4470f3587ea34efd0351d6aca0f33ff56 Jan 29 16:57:46 crc kubenswrapper[4895]: I0129 16:57:46.560179 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" event={"ID":"b749d017-f562-4021-9a9d-474569a1400e","Type":"ContainerStarted","Data":"f293e2ef1e1af8d37ecd6738d46a84a4470f3587ea34efd0351d6aca0f33ff56"} Jan 29 16:57:47 crc kubenswrapper[4895]: I0129 16:57:47.572175 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" 
event={"ID":"b749d017-f562-4021-9a9d-474569a1400e","Type":"ContainerStarted","Data":"158bd840254d9dc37fd87a700b3c951783d8910614366a244e84155819e0848b"} Jan 29 16:57:47 crc kubenswrapper[4895]: I0129 16:57:47.602384 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" podStartSLOduration=2.144214366 podStartE2EDuration="2.602340862s" podCreationTimestamp="2026-01-29 16:57:45 +0000 UTC" firstStartedPulling="2026-01-29 16:57:46.507992193 +0000 UTC m=+2750.310969457" lastFinishedPulling="2026-01-29 16:57:46.966118689 +0000 UTC m=+2750.769095953" observedRunningTime="2026-01-29 16:57:47.591231601 +0000 UTC m=+2751.394208885" watchObservedRunningTime="2026-01-29 16:57:47.602340862 +0000 UTC m=+2751.405318126" Jan 29 16:58:15 crc kubenswrapper[4895]: E0129 16:58:15.458177 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb749d017_f562_4021_9a9d_474569a1400e.slice/crio-conmon-158bd840254d9dc37fd87a700b3c951783d8910614366a244e84155819e0848b.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:58:16 crc kubenswrapper[4895]: I0129 16:58:16.230100 4895 generic.go:334] "Generic (PLEG): container finished" podID="b749d017-f562-4021-9a9d-474569a1400e" containerID="158bd840254d9dc37fd87a700b3c951783d8910614366a244e84155819e0848b" exitCode=0 Jan 29 16:58:16 crc kubenswrapper[4895]: I0129 16:58:16.230174 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" event={"ID":"b749d017-f562-4021-9a9d-474569a1400e","Type":"ContainerDied","Data":"158bd840254d9dc37fd87a700b3c951783d8910614366a244e84155819e0848b"} Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.698977 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.815021 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-neutron-metadata-combined-ca-bundle\") pod \"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.815405 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-bootstrap-combined-ca-bundle\") pod \"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.815444 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ssh-key-openstack-edpm-ipam\") pod \"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.815469 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.815508 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-ovn-default-certs-0\") pod 
\"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.815555 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.815584 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-inventory\") pod \"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.815600 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-repo-setup-combined-ca-bundle\") pod \"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.815624 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-nova-combined-ca-bundle\") pod \"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.815647 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzd6m\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-kube-api-access-mzd6m\") pod \"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc 
kubenswrapper[4895]: I0129 16:58:17.816788 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ovn-combined-ca-bundle\") pod \"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.816932 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-libvirt-combined-ca-bundle\") pod \"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.817268 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ceph\") pod \"b749d017-f562-4021-9a9d-474569a1400e\" (UID: \"b749d017-f562-4021-9a9d-474569a1400e\") " Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.822704 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.823941 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.824001 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.823852 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.824479 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.824518 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ceph" (OuterVolumeSpecName: "ceph") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.825162 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.825679 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-kube-api-access-mzd6m" (OuterVolumeSpecName: "kube-api-access-mzd6m") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). InnerVolumeSpecName "kube-api-access-mzd6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.826022 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.827564 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.828484 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.846836 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-inventory" (OuterVolumeSpecName: "inventory") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.863220 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b749d017-f562-4021-9a9d-474569a1400e" (UID: "b749d017-f562-4021-9a9d-474569a1400e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.920799 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.920943 4895 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.920963 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.920980 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.920994 4895 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.921031 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.921043 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.921056 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.921069 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.921110 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.921124 4895 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.921136 4895 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b749d017-f562-4021-9a9d-474569a1400e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:17 crc kubenswrapper[4895]: I0129 16:58:17.921150 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzd6m\" (UniqueName: \"kubernetes.io/projected/b749d017-f562-4021-9a9d-474569a1400e-kube-api-access-mzd6m\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.254962 4895 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" event={"ID":"b749d017-f562-4021-9a9d-474569a1400e","Type":"ContainerDied","Data":"f293e2ef1e1af8d37ecd6738d46a84a4470f3587ea34efd0351d6aca0f33ff56"} Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.255020 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f293e2ef1e1af8d37ecd6738d46a84a4470f3587ea34efd0351d6aca0f33ff56" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.255038 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-52ns4" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.366673 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk"] Jan 29 16:58:18 crc kubenswrapper[4895]: E0129 16:58:18.367296 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b749d017-f562-4021-9a9d-474569a1400e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.367324 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="b749d017-f562-4021-9a9d-474569a1400e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.367531 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="b749d017-f562-4021-9a9d-474569a1400e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.368385 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.372300 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.372569 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.372911 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.373054 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.373524 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.381010 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk"] Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.533893 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-slstk\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.534246 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnbdh\" (UniqueName: \"kubernetes.io/projected/f6e8674c-e754-4883-a39e-a77c2ae8cf02-kube-api-access-vnbdh\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-slstk\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.534515 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-slstk\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.534817 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-slstk\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.636302 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-slstk\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.636388 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-slstk\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.636464 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-slstk\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.636510 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnbdh\" (UniqueName: \"kubernetes.io/projected/f6e8674c-e754-4883-a39e-a77c2ae8cf02-kube-api-access-vnbdh\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-slstk\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.640342 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-slstk\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.640569 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-slstk\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.642911 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-slstk\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.656337 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnbdh\" (UniqueName: \"kubernetes.io/projected/f6e8674c-e754-4883-a39e-a77c2ae8cf02-kube-api-access-vnbdh\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-slstk\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:18 crc kubenswrapper[4895]: I0129 16:58:18.691493 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:19 crc kubenswrapper[4895]: I0129 16:58:19.230625 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk"] Jan 29 16:58:19 crc kubenswrapper[4895]: I0129 16:58:19.241544 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:58:19 crc kubenswrapper[4895]: I0129 16:58:19.264927 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" event={"ID":"f6e8674c-e754-4883-a39e-a77c2ae8cf02","Type":"ContainerStarted","Data":"51103df975304e852f3f88488b920d2bb684f2d5c29002d1d47c286010973863"} Jan 29 16:58:20 crc kubenswrapper[4895]: I0129 16:58:20.276521 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" event={"ID":"f6e8674c-e754-4883-a39e-a77c2ae8cf02","Type":"ContainerStarted","Data":"643d7afa8edaa7b9583f62f701b177471624d313e78a319f1fdac91b5912f0f2"} Jan 29 16:58:20 crc kubenswrapper[4895]: I0129 16:58:20.292558 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" 
podStartSLOduration=1.8298787349999999 podStartE2EDuration="2.292536864s" podCreationTimestamp="2026-01-29 16:58:18 +0000 UTC" firstStartedPulling="2026-01-29 16:58:19.241272323 +0000 UTC m=+2783.044249587" lastFinishedPulling="2026-01-29 16:58:19.703930462 +0000 UTC m=+2783.506907716" observedRunningTime="2026-01-29 16:58:20.290160761 +0000 UTC m=+2784.093138025" watchObservedRunningTime="2026-01-29 16:58:20.292536864 +0000 UTC m=+2784.095514128" Jan 29 16:58:25 crc kubenswrapper[4895]: I0129 16:58:25.324933 4895 generic.go:334] "Generic (PLEG): container finished" podID="f6e8674c-e754-4883-a39e-a77c2ae8cf02" containerID="643d7afa8edaa7b9583f62f701b177471624d313e78a319f1fdac91b5912f0f2" exitCode=0 Jan 29 16:58:25 crc kubenswrapper[4895]: I0129 16:58:25.325043 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" event={"ID":"f6e8674c-e754-4883-a39e-a77c2ae8cf02","Type":"ContainerDied","Data":"643d7afa8edaa7b9583f62f701b177471624d313e78a319f1fdac91b5912f0f2"} Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.690524 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.820335 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-inventory\") pod \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.820434 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnbdh\" (UniqueName: \"kubernetes.io/projected/f6e8674c-e754-4883-a39e-a77c2ae8cf02-kube-api-access-vnbdh\") pod \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.820715 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-ceph\") pod \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.821679 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-ssh-key-openstack-edpm-ipam\") pod \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\" (UID: \"f6e8674c-e754-4883-a39e-a77c2ae8cf02\") " Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.827683 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e8674c-e754-4883-a39e-a77c2ae8cf02-kube-api-access-vnbdh" (OuterVolumeSpecName: "kube-api-access-vnbdh") pod "f6e8674c-e754-4883-a39e-a77c2ae8cf02" (UID: "f6e8674c-e754-4883-a39e-a77c2ae8cf02"). InnerVolumeSpecName "kube-api-access-vnbdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.829128 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-ceph" (OuterVolumeSpecName: "ceph") pod "f6e8674c-e754-4883-a39e-a77c2ae8cf02" (UID: "f6e8674c-e754-4883-a39e-a77c2ae8cf02"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.850025 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-inventory" (OuterVolumeSpecName: "inventory") pod "f6e8674c-e754-4883-a39e-a77c2ae8cf02" (UID: "f6e8674c-e754-4883-a39e-a77c2ae8cf02"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.850581 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f6e8674c-e754-4883-a39e-a77c2ae8cf02" (UID: "f6e8674c-e754-4883-a39e-a77c2ae8cf02"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.924858 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.924938 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.924955 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6e8674c-e754-4883-a39e-a77c2ae8cf02-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:26 crc kubenswrapper[4895]: I0129 16:58:26.924967 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnbdh\" (UniqueName: \"kubernetes.io/projected/f6e8674c-e754-4883-a39e-a77c2ae8cf02-kube-api-access-vnbdh\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.118288 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lrx8j"] Jan 29 16:58:27 crc kubenswrapper[4895]: E0129 16:58:27.118664 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e8674c-e754-4883-a39e-a77c2ae8cf02" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.118682 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e8674c-e754-4883-a39e-a77c2ae8cf02" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.118900 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e8674c-e754-4883-a39e-a77c2ae8cf02" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 29 
16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.120118 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.136960 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrx8j"] Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.231310 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b85df56-a33e-4596-bddf-1a7da0dece65-utilities\") pod \"redhat-marketplace-lrx8j\" (UID: \"5b85df56-a33e-4596-bddf-1a7da0dece65\") " pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.231719 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szcmv\" (UniqueName: \"kubernetes.io/projected/5b85df56-a33e-4596-bddf-1a7da0dece65-kube-api-access-szcmv\") pod \"redhat-marketplace-lrx8j\" (UID: \"5b85df56-a33e-4596-bddf-1a7da0dece65\") " pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.231807 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b85df56-a33e-4596-bddf-1a7da0dece65-catalog-content\") pod \"redhat-marketplace-lrx8j\" (UID: \"5b85df56-a33e-4596-bddf-1a7da0dece65\") " pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.333670 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szcmv\" (UniqueName: \"kubernetes.io/projected/5b85df56-a33e-4596-bddf-1a7da0dece65-kube-api-access-szcmv\") pod \"redhat-marketplace-lrx8j\" (UID: \"5b85df56-a33e-4596-bddf-1a7da0dece65\") " 
pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.333728 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b85df56-a33e-4596-bddf-1a7da0dece65-catalog-content\") pod \"redhat-marketplace-lrx8j\" (UID: \"5b85df56-a33e-4596-bddf-1a7da0dece65\") " pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.333800 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b85df56-a33e-4596-bddf-1a7da0dece65-utilities\") pod \"redhat-marketplace-lrx8j\" (UID: \"5b85df56-a33e-4596-bddf-1a7da0dece65\") " pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.334585 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b85df56-a33e-4596-bddf-1a7da0dece65-catalog-content\") pod \"redhat-marketplace-lrx8j\" (UID: \"5b85df56-a33e-4596-bddf-1a7da0dece65\") " pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.334684 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b85df56-a33e-4596-bddf-1a7da0dece65-utilities\") pod \"redhat-marketplace-lrx8j\" (UID: \"5b85df56-a33e-4596-bddf-1a7da0dece65\") " pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.343877 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" event={"ID":"f6e8674c-e754-4883-a39e-a77c2ae8cf02","Type":"ContainerDied","Data":"51103df975304e852f3f88488b920d2bb684f2d5c29002d1d47c286010973863"} Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.343925 4895 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51103df975304e852f3f88488b920d2bb684f2d5c29002d1d47c286010973863" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.343985 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-slstk" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.355371 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szcmv\" (UniqueName: \"kubernetes.io/projected/5b85df56-a33e-4596-bddf-1a7da0dece65-kube-api-access-szcmv\") pod \"redhat-marketplace-lrx8j\" (UID: \"5b85df56-a33e-4596-bddf-1a7da0dece65\") " pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.440474 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt"] Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.440629 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.447085 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.450461 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.450500 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.450688 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.450727 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.450804 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.455259 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.464228 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt"] Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.539649 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlhdc\" (UniqueName: \"kubernetes.io/projected/27c14832-5dad-4502-ac5a-5f2cd24d7874-kube-api-access-vlhdc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.539758 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/27c14832-5dad-4502-ac5a-5f2cd24d7874-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.539824 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.539847 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.539921 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.540095 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.642668 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlhdc\" (UniqueName: \"kubernetes.io/projected/27c14832-5dad-4502-ac5a-5f2cd24d7874-kube-api-access-vlhdc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.643073 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/27c14832-5dad-4502-ac5a-5f2cd24d7874-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.643116 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.643137 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.643177 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.643201 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.645507 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/27c14832-5dad-4502-ac5a-5f2cd24d7874-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.656272 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.656457 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 
crc kubenswrapper[4895]: I0129 16:58:27.656538 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.662482 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.667771 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlhdc\" (UniqueName: \"kubernetes.io/projected/27c14832-5dad-4502-ac5a-5f2cd24d7874-kube-api-access-vlhdc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t8zxt\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.768700 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:58:27 crc kubenswrapper[4895]: I0129 16:58:27.973843 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrx8j"] Jan 29 16:58:28 crc kubenswrapper[4895]: I0129 16:58:28.091794 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt"] Jan 29 16:58:28 crc kubenswrapper[4895]: I0129 16:58:28.357735 4895 generic.go:334] "Generic (PLEG): container finished" podID="5b85df56-a33e-4596-bddf-1a7da0dece65" containerID="d3b9d45e362c790a67275d038f63381c6f86d2880eda0d65e3cfe6bcc8fc5966" exitCode=0 Jan 29 16:58:28 crc kubenswrapper[4895]: I0129 16:58:28.357811 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrx8j" event={"ID":"5b85df56-a33e-4596-bddf-1a7da0dece65","Type":"ContainerDied","Data":"d3b9d45e362c790a67275d038f63381c6f86d2880eda0d65e3cfe6bcc8fc5966"} Jan 29 16:58:28 crc kubenswrapper[4895]: I0129 16:58:28.357907 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrx8j" event={"ID":"5b85df56-a33e-4596-bddf-1a7da0dece65","Type":"ContainerStarted","Data":"e780fc7dfd8d793a84ae27b296c296f2ea98c3247425e65ae2096251056c67a5"} Jan 29 16:58:28 crc kubenswrapper[4895]: I0129 16:58:28.360433 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" event={"ID":"27c14832-5dad-4502-ac5a-5f2cd24d7874","Type":"ContainerStarted","Data":"d05559312d2c2b84ca4c1b48a27ac896c2894bfc9097d33c54fe75be1f3fcaa5"} Jan 29 16:58:28 crc kubenswrapper[4895]: E0129 16:58:28.485615 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:58:28 crc kubenswrapper[4895]: E0129 16:58:28.485785 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szcmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lrx8j_openshift-marketplace(5b85df56-a33e-4596-bddf-1a7da0dece65): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 
403 (Forbidden)" logger="UnhandledError" Jan 29 16:58:28 crc kubenswrapper[4895]: E0129 16:58:28.486950 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 16:58:29 crc kubenswrapper[4895]: I0129 16:58:29.372225 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" event={"ID":"27c14832-5dad-4502-ac5a-5f2cd24d7874","Type":"ContainerStarted","Data":"bd7f5b3dd17a9feb9ecd669c7e612644e09c2a87447a55b09b892798b871441d"} Jan 29 16:58:29 crc kubenswrapper[4895]: E0129 16:58:29.374227 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 16:58:29 crc kubenswrapper[4895]: I0129 16:58:29.450749 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" podStartSLOduration=1.789505696 podStartE2EDuration="2.450690996s" podCreationTimestamp="2026-01-29 16:58:27 +0000 UTC" firstStartedPulling="2026-01-29 16:58:28.09563792 +0000 UTC m=+2791.898615184" lastFinishedPulling="2026-01-29 16:58:28.75682322 +0000 UTC m=+2792.559800484" observedRunningTime="2026-01-29 16:58:29.413657771 +0000 UTC m=+2793.216635075" watchObservedRunningTime="2026-01-29 16:58:29.450690996 +0000 UTC m=+2793.253668280" Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.094555 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-c9dxd"] Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.097857 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.106977 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c9dxd"] Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.133138 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e9ec80-fcd0-4eca-8c96-01a531355911-catalog-content\") pod \"redhat-operators-c9dxd\" (UID: \"29e9ec80-fcd0-4eca-8c96-01a531355911\") " pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.133431 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8r64\" (UniqueName: \"kubernetes.io/projected/29e9ec80-fcd0-4eca-8c96-01a531355911-kube-api-access-t8r64\") pod \"redhat-operators-c9dxd\" (UID: \"29e9ec80-fcd0-4eca-8c96-01a531355911\") " pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.133475 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e9ec80-fcd0-4eca-8c96-01a531355911-utilities\") pod \"redhat-operators-c9dxd\" (UID: \"29e9ec80-fcd0-4eca-8c96-01a531355911\") " pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.234044 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e9ec80-fcd0-4eca-8c96-01a531355911-catalog-content\") pod \"redhat-operators-c9dxd\" (UID: \"29e9ec80-fcd0-4eca-8c96-01a531355911\") " 
pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.234204 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8r64\" (UniqueName: \"kubernetes.io/projected/29e9ec80-fcd0-4eca-8c96-01a531355911-kube-api-access-t8r64\") pod \"redhat-operators-c9dxd\" (UID: \"29e9ec80-fcd0-4eca-8c96-01a531355911\") " pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.234245 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e9ec80-fcd0-4eca-8c96-01a531355911-utilities\") pod \"redhat-operators-c9dxd\" (UID: \"29e9ec80-fcd0-4eca-8c96-01a531355911\") " pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.234736 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e9ec80-fcd0-4eca-8c96-01a531355911-utilities\") pod \"redhat-operators-c9dxd\" (UID: \"29e9ec80-fcd0-4eca-8c96-01a531355911\") " pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.234939 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e9ec80-fcd0-4eca-8c96-01a531355911-catalog-content\") pod \"redhat-operators-c9dxd\" (UID: \"29e9ec80-fcd0-4eca-8c96-01a531355911\") " pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.261262 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8r64\" (UniqueName: \"kubernetes.io/projected/29e9ec80-fcd0-4eca-8c96-01a531355911-kube-api-access-t8r64\") pod \"redhat-operators-c9dxd\" (UID: \"29e9ec80-fcd0-4eca-8c96-01a531355911\") " pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 16:58:32 
crc kubenswrapper[4895]: I0129 16:58:32.427064 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 16:58:32 crc kubenswrapper[4895]: W0129 16:58:32.912169 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29e9ec80_fcd0_4eca_8c96_01a531355911.slice/crio-470a47a5ae21df9b5776f7e2d893a33fcd1c9a1378cde95f913035fb53ac4644 WatchSource:0}: Error finding container 470a47a5ae21df9b5776f7e2d893a33fcd1c9a1378cde95f913035fb53ac4644: Status 404 returned error can't find the container with id 470a47a5ae21df9b5776f7e2d893a33fcd1c9a1378cde95f913035fb53ac4644 Jan 29 16:58:32 crc kubenswrapper[4895]: I0129 16:58:32.918530 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c9dxd"] Jan 29 16:58:33 crc kubenswrapper[4895]: I0129 16:58:33.413968 4895 generic.go:334] "Generic (PLEG): container finished" podID="29e9ec80-fcd0-4eca-8c96-01a531355911" containerID="f509660da55fdc56d583eebc1bdebf15b3c280e6e4dbe29d0711b46bb9c11359" exitCode=0 Jan 29 16:58:33 crc kubenswrapper[4895]: I0129 16:58:33.414090 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9dxd" event={"ID":"29e9ec80-fcd0-4eca-8c96-01a531355911","Type":"ContainerDied","Data":"f509660da55fdc56d583eebc1bdebf15b3c280e6e4dbe29d0711b46bb9c11359"} Jan 29 16:58:33 crc kubenswrapper[4895]: I0129 16:58:33.414333 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9dxd" event={"ID":"29e9ec80-fcd0-4eca-8c96-01a531355911","Type":"ContainerStarted","Data":"470a47a5ae21df9b5776f7e2d893a33fcd1c9a1378cde95f913035fb53ac4644"} Jan 29 16:58:33 crc kubenswrapper[4895]: E0129 16:58:33.548083 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:58:33 crc kubenswrapper[4895]: E0129 16:58:33.548276 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8r64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-c9dxd_openshift-marketplace(29e9ec80-fcd0-4eca-8c96-01a531355911): ErrImagePull: initializing 
source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:58:33 crc kubenswrapper[4895]: E0129 16:58:33.550325 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 16:58:34 crc kubenswrapper[4895]: E0129 16:58:34.426953 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 16:58:45 crc kubenswrapper[4895]: E0129 16:58:45.170106 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:58:45 crc kubenswrapper[4895]: E0129 16:58:45.172018 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szcmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lrx8j_openshift-marketplace(5b85df56-a33e-4596-bddf-1a7da0dece65): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:58:45 crc kubenswrapper[4895]: E0129 16:58:45.173480 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 16:58:45 crc kubenswrapper[4895]: E0129 16:58:45.210171 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:58:45 crc kubenswrapper[4895]: E0129 16:58:45.210531 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8r64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMe
ssagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-c9dxd_openshift-marketplace(29e9ec80-fcd0-4eca-8c96-01a531355911): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:58:45 crc kubenswrapper[4895]: E0129 16:58:45.211777 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.559934 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2zbgz"] Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.563045 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2zbgz" Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.605996 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2zbgz"] Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.618548 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6337deb0-d51e-4fa1-8aab-24cebc2988c2-utilities\") pod \"community-operators-2zbgz\" (UID: \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\") " pod="openshift-marketplace/community-operators-2zbgz" Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.618757 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj5zt\" (UniqueName: \"kubernetes.io/projected/6337deb0-d51e-4fa1-8aab-24cebc2988c2-kube-api-access-jj5zt\") pod \"community-operators-2zbgz\" (UID: \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\") " pod="openshift-marketplace/community-operators-2zbgz" Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.618806 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6337deb0-d51e-4fa1-8aab-24cebc2988c2-catalog-content\") pod \"community-operators-2zbgz\" (UID: \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\") " pod="openshift-marketplace/community-operators-2zbgz" Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.721462 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6337deb0-d51e-4fa1-8aab-24cebc2988c2-utilities\") pod \"community-operators-2zbgz\" (UID: \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\") " pod="openshift-marketplace/community-operators-2zbgz" Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.721601 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jj5zt\" (UniqueName: \"kubernetes.io/projected/6337deb0-d51e-4fa1-8aab-24cebc2988c2-kube-api-access-jj5zt\") pod \"community-operators-2zbgz\" (UID: \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\") " pod="openshift-marketplace/community-operators-2zbgz" Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.721640 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6337deb0-d51e-4fa1-8aab-24cebc2988c2-catalog-content\") pod \"community-operators-2zbgz\" (UID: \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\") " pod="openshift-marketplace/community-operators-2zbgz" Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.722178 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6337deb0-d51e-4fa1-8aab-24cebc2988c2-utilities\") pod \"community-operators-2zbgz\" (UID: \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\") " pod="openshift-marketplace/community-operators-2zbgz" Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.722299 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6337deb0-d51e-4fa1-8aab-24cebc2988c2-catalog-content\") pod \"community-operators-2zbgz\" (UID: \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\") " pod="openshift-marketplace/community-operators-2zbgz" Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.749855 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj5zt\" (UniqueName: \"kubernetes.io/projected/6337deb0-d51e-4fa1-8aab-24cebc2988c2-kube-api-access-jj5zt\") pod \"community-operators-2zbgz\" (UID: \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\") " pod="openshift-marketplace/community-operators-2zbgz" Jan 29 16:58:53 crc kubenswrapper[4895]: I0129 16:58:53.917910 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2zbgz" Jan 29 16:58:54 crc kubenswrapper[4895]: I0129 16:58:54.461887 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2zbgz"] Jan 29 16:58:54 crc kubenswrapper[4895]: I0129 16:58:54.653141 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zbgz" event={"ID":"6337deb0-d51e-4fa1-8aab-24cebc2988c2","Type":"ContainerStarted","Data":"d6d9beee1e716c2040c990ee4a283de2f73514adea37abfbf39984d05d7d14d2"} Jan 29 16:58:55 crc kubenswrapper[4895]: I0129 16:58:55.667622 4895 generic.go:334] "Generic (PLEG): container finished" podID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" containerID="7a68ee92b965b04e91237cf617faf70bf18eb48d60049aaea394d183b7919922" exitCode=0 Jan 29 16:58:55 crc kubenswrapper[4895]: I0129 16:58:55.667689 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zbgz" event={"ID":"6337deb0-d51e-4fa1-8aab-24cebc2988c2","Type":"ContainerDied","Data":"7a68ee92b965b04e91237cf617faf70bf18eb48d60049aaea394d183b7919922"} Jan 29 16:58:55 crc kubenswrapper[4895]: E0129 16:58:55.802179 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:58:55 crc kubenswrapper[4895]: E0129 16:58:55.802765 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj5zt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2zbgz_openshift-marketplace(6337deb0-d51e-4fa1-8aab-24cebc2988c2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:58:55 crc kubenswrapper[4895]: E0129 16:58:55.804128 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 16:58:56 crc kubenswrapper[4895]: E0129 16:58:56.679015 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 16:58:59 crc kubenswrapper[4895]: E0129 16:58:59.039574 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 16:58:59 crc kubenswrapper[4895]: E0129 16:58:59.039746 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 16:59:07 crc kubenswrapper[4895]: E0129 16:59:07.199988 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:59:07 crc kubenswrapper[4895]: E0129 16:59:07.201071 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj5zt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2zbgz_openshift-marketplace(6337deb0-d51e-4fa1-8aab-24cebc2988c2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:59:07 crc kubenswrapper[4895]: E0129 16:59:07.202341 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 
(Forbidden)\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 16:59:08 crc kubenswrapper[4895]: I0129 16:59:08.804659 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8s2lq"] Jan 29 16:59:08 crc kubenswrapper[4895]: I0129 16:59:08.808185 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8s2lq" Jan 29 16:59:08 crc kubenswrapper[4895]: I0129 16:59:08.828627 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8s2lq"] Jan 29 16:59:08 crc kubenswrapper[4895]: I0129 16:59:08.854471 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7xwj\" (UniqueName: \"kubernetes.io/projected/14f81c9c-0e13-446b-a525-370c39259440-kube-api-access-q7xwj\") pod \"certified-operators-8s2lq\" (UID: \"14f81c9c-0e13-446b-a525-370c39259440\") " pod="openshift-marketplace/certified-operators-8s2lq" Jan 29 16:59:08 crc kubenswrapper[4895]: I0129 16:59:08.854525 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14f81c9c-0e13-446b-a525-370c39259440-utilities\") pod \"certified-operators-8s2lq\" (UID: \"14f81c9c-0e13-446b-a525-370c39259440\") " pod="openshift-marketplace/certified-operators-8s2lq" Jan 29 16:59:08 crc kubenswrapper[4895]: I0129 16:59:08.854759 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14f81c9c-0e13-446b-a525-370c39259440-catalog-content\") pod \"certified-operators-8s2lq\" (UID: \"14f81c9c-0e13-446b-a525-370c39259440\") " pod="openshift-marketplace/certified-operators-8s2lq" Jan 29 16:59:08 crc kubenswrapper[4895]: I0129 16:59:08.958379 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q7xwj\" (UniqueName: \"kubernetes.io/projected/14f81c9c-0e13-446b-a525-370c39259440-kube-api-access-q7xwj\") pod \"certified-operators-8s2lq\" (UID: \"14f81c9c-0e13-446b-a525-370c39259440\") " pod="openshift-marketplace/certified-operators-8s2lq" Jan 29 16:59:08 crc kubenswrapper[4895]: I0129 16:59:08.958458 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14f81c9c-0e13-446b-a525-370c39259440-utilities\") pod \"certified-operators-8s2lq\" (UID: \"14f81c9c-0e13-446b-a525-370c39259440\") " pod="openshift-marketplace/certified-operators-8s2lq" Jan 29 16:59:08 crc kubenswrapper[4895]: I0129 16:59:08.958544 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14f81c9c-0e13-446b-a525-370c39259440-catalog-content\") pod \"certified-operators-8s2lq\" (UID: \"14f81c9c-0e13-446b-a525-370c39259440\") " pod="openshift-marketplace/certified-operators-8s2lq" Jan 29 16:59:08 crc kubenswrapper[4895]: I0129 16:59:08.959207 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14f81c9c-0e13-446b-a525-370c39259440-catalog-content\") pod \"certified-operators-8s2lq\" (UID: \"14f81c9c-0e13-446b-a525-370c39259440\") " pod="openshift-marketplace/certified-operators-8s2lq" Jan 29 16:59:08 crc kubenswrapper[4895]: I0129 16:59:08.959275 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14f81c9c-0e13-446b-a525-370c39259440-utilities\") pod \"certified-operators-8s2lq\" (UID: \"14f81c9c-0e13-446b-a525-370c39259440\") " pod="openshift-marketplace/certified-operators-8s2lq" Jan 29 16:59:08 crc kubenswrapper[4895]: I0129 16:59:08.984982 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q7xwj\" (UniqueName: \"kubernetes.io/projected/14f81c9c-0e13-446b-a525-370c39259440-kube-api-access-q7xwj\") pod \"certified-operators-8s2lq\" (UID: \"14f81c9c-0e13-446b-a525-370c39259440\") " pod="openshift-marketplace/certified-operators-8s2lq" Jan 29 16:59:09 crc kubenswrapper[4895]: I0129 16:59:09.146666 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8s2lq" Jan 29 16:59:09 crc kubenswrapper[4895]: I0129 16:59:09.638086 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8s2lq"] Jan 29 16:59:10 crc kubenswrapper[4895]: I0129 16:59:10.831143 4895 generic.go:334] "Generic (PLEG): container finished" podID="14f81c9c-0e13-446b-a525-370c39259440" containerID="28dcc3d50de794ccfcca70781bb83143642b8951a754af9eeedf02e40eb97d3b" exitCode=0 Jan 29 16:59:10 crc kubenswrapper[4895]: I0129 16:59:10.831262 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"14f81c9c-0e13-446b-a525-370c39259440","Type":"ContainerDied","Data":"28dcc3d50de794ccfcca70781bb83143642b8951a754af9eeedf02e40eb97d3b"} Jan 29 16:59:10 crc kubenswrapper[4895]: I0129 16:59:10.831641 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"14f81c9c-0e13-446b-a525-370c39259440","Type":"ContainerStarted","Data":"f00ad1f5c64bea0fd1148495f5a41a5ca85a7503accb01d56713056615748797"} Jan 29 16:59:10 crc kubenswrapper[4895]: E0129 16:59:10.986496 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:59:10 crc kubenswrapper[4895]: E0129 16:59:10.987297 4895 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7xwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8s2lq_openshift-marketplace(14f81c9c-0e13-446b-a525-370c39259440): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:59:10 crc kubenswrapper[4895]: E0129 16:59:10.988538 4895 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 16:59:11 crc kubenswrapper[4895]: E0129 16:59:11.842786 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 16:59:13 crc kubenswrapper[4895]: E0129 16:59:13.165661 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:59:13 crc kubenswrapper[4895]: E0129 16:59:13.167515 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8r64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-c9dxd_openshift-marketplace(29e9ec80-fcd0-4eca-8c96-01a531355911): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:59:13 crc kubenswrapper[4895]: E0129 16:59:13.168805 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 16:59:14 crc kubenswrapper[4895]: E0129 16:59:14.170925 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:59:14 crc kubenswrapper[4895]: E0129 16:59:14.171639 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szcmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termin
ationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lrx8j_openshift-marketplace(5b85df56-a33e-4596-bddf-1a7da0dece65): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:59:14 crc kubenswrapper[4895]: E0129 16:59:14.173793 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 16:59:19 crc kubenswrapper[4895]: E0129 16:59:19.041895 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 16:59:22 crc kubenswrapper[4895]: E0129 16:59:22.169267 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:59:22 crc kubenswrapper[4895]: E0129 16:59:22.170894 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7xwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8s2lq_openshift-marketplace(14f81c9c-0e13-446b-a525-370c39259440): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:59:22 crc kubenswrapper[4895]: E0129 16:59:22.172114 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid 
status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 16:59:24 crc kubenswrapper[4895]: E0129 16:59:24.040675 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 16:59:25 crc kubenswrapper[4895]: E0129 16:59:25.039731 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 16:59:27 crc kubenswrapper[4895]: I0129 16:59:27.823301 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:59:27 crc kubenswrapper[4895]: I0129 16:59:27.823389 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:59:33 crc kubenswrapper[4895]: E0129 16:59:33.039052 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 16:59:34 crc kubenswrapper[4895]: E0129 16:59:34.214345 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:59:34 crc kubenswrapper[4895]: E0129 16:59:34.214592 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj5zt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Ter
minationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2zbgz_openshift-marketplace(6337deb0-d51e-4fa1-8aab-24cebc2988c2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:59:34 crc kubenswrapper[4895]: E0129 16:59:34.215828 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 16:59:36 crc kubenswrapper[4895]: E0129 16:59:36.038578 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 16:59:37 crc kubenswrapper[4895]: E0129 16:59:37.055335 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 16:59:37 crc kubenswrapper[4895]: I0129 16:59:37.086140 4895 generic.go:334] "Generic (PLEG): container finished" podID="27c14832-5dad-4502-ac5a-5f2cd24d7874" containerID="bd7f5b3dd17a9feb9ecd669c7e612644e09c2a87447a55b09b892798b871441d" exitCode=0 Jan 29 16:59:37 crc kubenswrapper[4895]: 
I0129 16:59:37.086242 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" event={"ID":"27c14832-5dad-4502-ac5a-5f2cd24d7874","Type":"ContainerDied","Data":"bd7f5b3dd17a9feb9ecd669c7e612644e09c2a87447a55b09b892798b871441d"} Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.524664 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.584108 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ovn-combined-ca-bundle\") pod \"27c14832-5dad-4502-ac5a-5f2cd24d7874\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.584239 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlhdc\" (UniqueName: \"kubernetes.io/projected/27c14832-5dad-4502-ac5a-5f2cd24d7874-kube-api-access-vlhdc\") pod \"27c14832-5dad-4502-ac5a-5f2cd24d7874\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.584267 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-inventory\") pod \"27c14832-5dad-4502-ac5a-5f2cd24d7874\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.584445 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ceph\") pod \"27c14832-5dad-4502-ac5a-5f2cd24d7874\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.584475 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/27c14832-5dad-4502-ac5a-5f2cd24d7874-ovncontroller-config-0\") pod \"27c14832-5dad-4502-ac5a-5f2cd24d7874\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.584617 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ssh-key-openstack-edpm-ipam\") pod \"27c14832-5dad-4502-ac5a-5f2cd24d7874\" (UID: \"27c14832-5dad-4502-ac5a-5f2cd24d7874\") " Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.593836 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c14832-5dad-4502-ac5a-5f2cd24d7874-kube-api-access-vlhdc" (OuterVolumeSpecName: "kube-api-access-vlhdc") pod "27c14832-5dad-4502-ac5a-5f2cd24d7874" (UID: "27c14832-5dad-4502-ac5a-5f2cd24d7874"). InnerVolumeSpecName "kube-api-access-vlhdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.597775 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ceph" (OuterVolumeSpecName: "ceph") pod "27c14832-5dad-4502-ac5a-5f2cd24d7874" (UID: "27c14832-5dad-4502-ac5a-5f2cd24d7874"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.606844 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "27c14832-5dad-4502-ac5a-5f2cd24d7874" (UID: "27c14832-5dad-4502-ac5a-5f2cd24d7874"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.615488 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27c14832-5dad-4502-ac5a-5f2cd24d7874" (UID: "27c14832-5dad-4502-ac5a-5f2cd24d7874"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.618303 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-inventory" (OuterVolumeSpecName: "inventory") pod "27c14832-5dad-4502-ac5a-5f2cd24d7874" (UID: "27c14832-5dad-4502-ac5a-5f2cd24d7874"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.621207 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c14832-5dad-4502-ac5a-5f2cd24d7874-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "27c14832-5dad-4502-ac5a-5f2cd24d7874" (UID: "27c14832-5dad-4502-ac5a-5f2cd24d7874"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.687325 4895 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/27c14832-5dad-4502-ac5a-5f2cd24d7874-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.687396 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.687413 4895 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.687429 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlhdc\" (UniqueName: \"kubernetes.io/projected/27c14832-5dad-4502-ac5a-5f2cd24d7874-kube-api-access-vlhdc\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.687442 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:38 crc kubenswrapper[4895]: I0129 16:59:38.687474 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/27c14832-5dad-4502-ac5a-5f2cd24d7874-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.105480 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" 
event={"ID":"27c14832-5dad-4502-ac5a-5f2cd24d7874","Type":"ContainerDied","Data":"d05559312d2c2b84ca4c1b48a27ac896c2894bfc9097d33c54fe75be1f3fcaa5"} Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.105527 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d05559312d2c2b84ca4c1b48a27ac896c2894bfc9097d33c54fe75be1f3fcaa5" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.106103 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t8zxt" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.255311 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7"] Jan 29 16:59:39 crc kubenswrapper[4895]: E0129 16:59:39.255710 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c14832-5dad-4502-ac5a-5f2cd24d7874" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.255730 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c14832-5dad-4502-ac5a-5f2cd24d7874" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.255970 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c14832-5dad-4502-ac5a-5f2cd24d7874" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.256768 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.261242 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.261303 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.261633 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.262140 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.262471 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.263175 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.265489 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.269469 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7"] Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.298943 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 
16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.298996 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.299036 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.299098 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.299164 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.299269 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnmsv\" (UniqueName: \"kubernetes.io/projected/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-kube-api-access-vnmsv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.299526 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.401275 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.401344 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 
29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.401423 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.401468 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnmsv\" (UniqueName: \"kubernetes.io/projected/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-kube-api-access-vnmsv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.401515 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.401688 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.401722 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.405426 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.405671 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.405706 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.406456 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: 
\"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.408066 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.408681 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.420775 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnmsv\" (UniqueName: \"kubernetes.io/projected/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-kube-api-access-vnmsv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:39 crc kubenswrapper[4895]: I0129 16:59:39.584014 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 16:59:40 crc kubenswrapper[4895]: I0129 16:59:40.173346 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7"] Jan 29 16:59:40 crc kubenswrapper[4895]: W0129 16:59:40.175129 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd5e91b8_4378_437f_be31_9a86a5fc2ef7.slice/crio-0bd032b02bd0090dedc6afb733b0185773dd3f0fede3ec17dca7a897a1ae6897 WatchSource:0}: Error finding container 0bd032b02bd0090dedc6afb733b0185773dd3f0fede3ec17dca7a897a1ae6897: Status 404 returned error can't find the container with id 0bd032b02bd0090dedc6afb733b0185773dd3f0fede3ec17dca7a897a1ae6897 Jan 29 16:59:41 crc kubenswrapper[4895]: I0129 16:59:41.135704 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" event={"ID":"cd5e91b8-4378-437f-be31-9a86a5fc2ef7","Type":"ContainerStarted","Data":"a96110f988d31516d4de1b3d9455f1ea095aecd7a0c18f068606d2293c0eb502"} Jan 29 16:59:41 crc kubenswrapper[4895]: I0129 16:59:41.136644 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" event={"ID":"cd5e91b8-4378-437f-be31-9a86a5fc2ef7","Type":"ContainerStarted","Data":"0bd032b02bd0090dedc6afb733b0185773dd3f0fede3ec17dca7a897a1ae6897"} Jan 29 16:59:41 crc kubenswrapper[4895]: I0129 16:59:41.172925 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" podStartSLOduration=1.670542414 podStartE2EDuration="2.172900379s" podCreationTimestamp="2026-01-29 16:59:39 +0000 UTC" firstStartedPulling="2026-01-29 16:59:40.177644965 +0000 UTC m=+2863.980622239" lastFinishedPulling="2026-01-29 16:59:40.68000294 +0000 UTC 
m=+2864.482980204" observedRunningTime="2026-01-29 16:59:41.159813724 +0000 UTC m=+2864.962791008" watchObservedRunningTime="2026-01-29 16:59:41.172900379 +0000 UTC m=+2864.975877663" Jan 29 16:59:48 crc kubenswrapper[4895]: E0129 16:59:48.040694 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 16:59:48 crc kubenswrapper[4895]: E0129 16:59:48.170919 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:59:48 crc kubenswrapper[4895]: E0129 16:59:48.171507 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7xwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8s2lq_openshift-marketplace(14f81c9c-0e13-446b-a525-370c39259440): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:59:48 crc kubenswrapper[4895]: E0129 16:59:48.172759 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 16:59:50 crc kubenswrapper[4895]: E0129 16:59:50.039784 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 16:59:51 crc kubenswrapper[4895]: E0129 16:59:51.041287 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 16:59:57 crc kubenswrapper[4895]: I0129 16:59:57.823312 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:59:57 crc kubenswrapper[4895]: I0129 16:59:57.823684 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.156573 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq"] Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.159280 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.165700 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.166052 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.168250 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq"] Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.233736 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec083dbe-3c00-420b-b71a-d56c57270ab6-secret-volume\") pod \"collect-profiles-29495100-wc5xq\" (UID: \"ec083dbe-3c00-420b-b71a-d56c57270ab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.233857 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84t94\" (UniqueName: \"kubernetes.io/projected/ec083dbe-3c00-420b-b71a-d56c57270ab6-kube-api-access-84t94\") pod \"collect-profiles-29495100-wc5xq\" (UID: \"ec083dbe-3c00-420b-b71a-d56c57270ab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.234123 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec083dbe-3c00-420b-b71a-d56c57270ab6-config-volume\") pod \"collect-profiles-29495100-wc5xq\" (UID: \"ec083dbe-3c00-420b-b71a-d56c57270ab6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.337216 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec083dbe-3c00-420b-b71a-d56c57270ab6-secret-volume\") pod \"collect-profiles-29495100-wc5xq\" (UID: \"ec083dbe-3c00-420b-b71a-d56c57270ab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.337345 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84t94\" (UniqueName: \"kubernetes.io/projected/ec083dbe-3c00-420b-b71a-d56c57270ab6-kube-api-access-84t94\") pod \"collect-profiles-29495100-wc5xq\" (UID: \"ec083dbe-3c00-420b-b71a-d56c57270ab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.337632 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec083dbe-3c00-420b-b71a-d56c57270ab6-config-volume\") pod \"collect-profiles-29495100-wc5xq\" (UID: \"ec083dbe-3c00-420b-b71a-d56c57270ab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.339412 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec083dbe-3c00-420b-b71a-d56c57270ab6-config-volume\") pod \"collect-profiles-29495100-wc5xq\" (UID: \"ec083dbe-3c00-420b-b71a-d56c57270ab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.346260 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ec083dbe-3c00-420b-b71a-d56c57270ab6-secret-volume\") pod \"collect-profiles-29495100-wc5xq\" (UID: \"ec083dbe-3c00-420b-b71a-d56c57270ab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.358607 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84t94\" (UniqueName: \"kubernetes.io/projected/ec083dbe-3c00-420b-b71a-d56c57270ab6-kube-api-access-84t94\") pod \"collect-profiles-29495100-wc5xq\" (UID: \"ec083dbe-3c00-420b-b71a-d56c57270ab6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:00 crc kubenswrapper[4895]: I0129 17:00:00.490163 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:01 crc kubenswrapper[4895]: I0129 17:00:01.001160 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq"] Jan 29 17:00:01 crc kubenswrapper[4895]: E0129 17:00:01.039310 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:00:01 crc kubenswrapper[4895]: I0129 17:00:01.323498 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" event={"ID":"ec083dbe-3c00-420b-b71a-d56c57270ab6","Type":"ContainerStarted","Data":"eb1157a98d0791128a2db743cf39d548a869c5e6977458014a00fad1756cda34"} Jan 29 17:00:03 crc kubenswrapper[4895]: E0129 17:00:03.041233 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:00:05 crc kubenswrapper[4895]: E0129 17:00:05.172909 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:00:05 crc kubenswrapper[4895]: E0129 17:00:05.174272 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8r64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Seccom
pProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-c9dxd_openshift-marketplace(29e9ec80-fcd0-4eca-8c96-01a531355911): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:00:05 crc kubenswrapper[4895]: E0129 17:00:05.175568 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:00:05 crc kubenswrapper[4895]: I0129 17:00:05.404087 4895 generic.go:334] "Generic (PLEG): container finished" podID="ec083dbe-3c00-420b-b71a-d56c57270ab6" containerID="38b0ecb61c46b95a6f2a8ed5b66ca13e2b6528e7c4e5cb9b1abb4836d9ad0101" exitCode=0 Jan 29 17:00:05 crc kubenswrapper[4895]: I0129 17:00:05.404167 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" event={"ID":"ec083dbe-3c00-420b-b71a-d56c57270ab6","Type":"ContainerDied","Data":"38b0ecb61c46b95a6f2a8ed5b66ca13e2b6528e7c4e5cb9b1abb4836d9ad0101"} Jan 29 17:00:06 crc kubenswrapper[4895]: E0129 17:00:06.186901 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 
17:00:06 crc kubenswrapper[4895]: E0129 17:00:06.187084 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szcmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lrx8j_openshift-marketplace(5b85df56-a33e-4596-bddf-1a7da0dece65): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:00:06 crc 
kubenswrapper[4895]: E0129 17:00:06.188380 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 17:00:06 crc kubenswrapper[4895]: I0129 17:00:06.769561 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:06 crc kubenswrapper[4895]: I0129 17:00:06.780393 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84t94\" (UniqueName: \"kubernetes.io/projected/ec083dbe-3c00-420b-b71a-d56c57270ab6-kube-api-access-84t94\") pod \"ec083dbe-3c00-420b-b71a-d56c57270ab6\" (UID: \"ec083dbe-3c00-420b-b71a-d56c57270ab6\") " Jan 29 17:00:06 crc kubenswrapper[4895]: I0129 17:00:06.780904 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec083dbe-3c00-420b-b71a-d56c57270ab6-secret-volume\") pod \"ec083dbe-3c00-420b-b71a-d56c57270ab6\" (UID: \"ec083dbe-3c00-420b-b71a-d56c57270ab6\") " Jan 29 17:00:06 crc kubenswrapper[4895]: I0129 17:00:06.781043 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec083dbe-3c00-420b-b71a-d56c57270ab6-config-volume\") pod \"ec083dbe-3c00-420b-b71a-d56c57270ab6\" (UID: \"ec083dbe-3c00-420b-b71a-d56c57270ab6\") " Jan 29 17:00:06 crc kubenswrapper[4895]: I0129 17:00:06.782196 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec083dbe-3c00-420b-b71a-d56c57270ab6-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"ec083dbe-3c00-420b-b71a-d56c57270ab6" (UID: "ec083dbe-3c00-420b-b71a-d56c57270ab6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4895]: I0129 17:00:06.788776 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec083dbe-3c00-420b-b71a-d56c57270ab6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ec083dbe-3c00-420b-b71a-d56c57270ab6" (UID: "ec083dbe-3c00-420b-b71a-d56c57270ab6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4895]: I0129 17:00:06.788974 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec083dbe-3c00-420b-b71a-d56c57270ab6-kube-api-access-84t94" (OuterVolumeSpecName: "kube-api-access-84t94") pod "ec083dbe-3c00-420b-b71a-d56c57270ab6" (UID: "ec083dbe-3c00-420b-b71a-d56c57270ab6"). InnerVolumeSpecName "kube-api-access-84t94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4895]: I0129 17:00:06.884180 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec083dbe-3c00-420b-b71a-d56c57270ab6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4895]: I0129 17:00:06.884222 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec083dbe-3c00-420b-b71a-d56c57270ab6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4895]: I0129 17:00:06.884234 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84t94\" (UniqueName: \"kubernetes.io/projected/ec083dbe-3c00-420b-b71a-d56c57270ab6-kube-api-access-84t94\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:07 crc kubenswrapper[4895]: I0129 17:00:07.424683 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" event={"ID":"ec083dbe-3c00-420b-b71a-d56c57270ab6","Type":"ContainerDied","Data":"eb1157a98d0791128a2db743cf39d548a869c5e6977458014a00fad1756cda34"} Jan 29 17:00:07 crc kubenswrapper[4895]: I0129 17:00:07.424965 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb1157a98d0791128a2db743cf39d548a869c5e6977458014a00fad1756cda34" Jan 29 17:00:07 crc kubenswrapper[4895]: I0129 17:00:07.424744 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq" Jan 29 17:00:07 crc kubenswrapper[4895]: I0129 17:00:07.860724 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l"] Jan 29 17:00:07 crc kubenswrapper[4895]: I0129 17:00:07.868861 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-fhr8l"] Jan 29 17:00:09 crc kubenswrapper[4895]: I0129 17:00:09.050450 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c306c8c0-a6f0-4811-9688-b811e9495c76" path="/var/lib/kubelet/pods/c306c8c0-a6f0-4811-9688-b811e9495c76/volumes" Jan 29 17:00:12 crc kubenswrapper[4895]: E0129 17:00:12.040611 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:00:14 crc kubenswrapper[4895]: E0129 17:00:14.040975 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:00:18 crc kubenswrapper[4895]: E0129 17:00:18.039525 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 17:00:20 crc kubenswrapper[4895]: E0129 17:00:20.040234 4895 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:00:23 crc kubenswrapper[4895]: E0129 17:00:23.040701 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:00:27 crc kubenswrapper[4895]: E0129 17:00:27.194707 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:00:27 crc kubenswrapper[4895]: E0129 17:00:27.195627 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj5zt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2zbgz_openshift-marketplace(6337deb0-d51e-4fa1-8aab-24cebc2988c2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:00:27 crc kubenswrapper[4895]: E0129 17:00:27.196983 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:00:27 crc kubenswrapper[4895]: I0129 17:00:27.823781 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:00:27 crc kubenswrapper[4895]: I0129 17:00:27.823858 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:00:27 crc kubenswrapper[4895]: I0129 17:00:27.823934 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 17:00:27 crc kubenswrapper[4895]: I0129 17:00:27.824941 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8076c5b4874e3a644860820766402316462025e7cb5b586483b35291947d5378"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:00:27 crc kubenswrapper[4895]: I0129 17:00:27.825055 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://8076c5b4874e3a644860820766402316462025e7cb5b586483b35291947d5378" gracePeriod=600 Jan 29 17:00:28 crc kubenswrapper[4895]: I0129 17:00:28.623652 4895 generic.go:334] "Generic (PLEG): container 
finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="8076c5b4874e3a644860820766402316462025e7cb5b586483b35291947d5378" exitCode=0 Jan 29 17:00:28 crc kubenswrapper[4895]: I0129 17:00:28.623720 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"8076c5b4874e3a644860820766402316462025e7cb5b586483b35291947d5378"} Jan 29 17:00:28 crc kubenswrapper[4895]: I0129 17:00:28.624547 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba"} Jan 29 17:00:28 crc kubenswrapper[4895]: I0129 17:00:28.624585 4895 scope.go:117] "RemoveContainer" containerID="484fa50c107a75f46a060617c543013e3cd0ae079e408866b619c81088c69617" Jan 29 17:00:31 crc kubenswrapper[4895]: E0129 17:00:31.041826 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 17:00:35 crc kubenswrapper[4895]: E0129 17:00:35.042475 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:00:37 crc kubenswrapper[4895]: E0129 17:00:37.170629 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:00:37 crc kubenswrapper[4895]: E0129 17:00:37.170814 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7xwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8s2lq_openshift-marketplace(14f81c9c-0e13-446b-a525-370c39259440): ErrImagePull: 
initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:00:37 crc kubenswrapper[4895]: E0129 17:00:37.172989 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:00:39 crc kubenswrapper[4895]: E0129 17:00:39.040961 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:00:41 crc kubenswrapper[4895]: I0129 17:00:41.769509 4895 generic.go:334] "Generic (PLEG): container finished" podID="cd5e91b8-4378-437f-be31-9a86a5fc2ef7" containerID="a96110f988d31516d4de1b3d9455f1ea095aecd7a0c18f068606d2293c0eb502" exitCode=0 Jan 29 17:00:41 crc kubenswrapper[4895]: I0129 17:00:41.769560 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" event={"ID":"cd5e91b8-4378-437f-be31-9a86a5fc2ef7","Type":"ContainerDied","Data":"a96110f988d31516d4de1b3d9455f1ea095aecd7a0c18f068606d2293c0eb502"} Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.166043 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.328588 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-ssh-key-openstack-edpm-ipam\") pod \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.329085 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-ceph\") pod \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.329202 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-nova-metadata-neutron-config-0\") pod \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.329346 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnmsv\" (UniqueName: \"kubernetes.io/projected/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-kube-api-access-vnmsv\") pod \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.329433 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-neutron-metadata-combined-ca-bundle\") pod \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 
17:00:43.329507 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-inventory\") pod \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.329538 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\" (UID: \"cd5e91b8-4378-437f-be31-9a86a5fc2ef7\") " Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.337686 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-kube-api-access-vnmsv" (OuterVolumeSpecName: "kube-api-access-vnmsv") pod "cd5e91b8-4378-437f-be31-9a86a5fc2ef7" (UID: "cd5e91b8-4378-437f-be31-9a86a5fc2ef7"). InnerVolumeSpecName "kube-api-access-vnmsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.339170 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-ceph" (OuterVolumeSpecName: "ceph") pod "cd5e91b8-4378-437f-be31-9a86a5fc2ef7" (UID: "cd5e91b8-4378-437f-be31-9a86a5fc2ef7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.340028 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "cd5e91b8-4378-437f-be31-9a86a5fc2ef7" (UID: "cd5e91b8-4378-437f-be31-9a86a5fc2ef7"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.362367 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-inventory" (OuterVolumeSpecName: "inventory") pod "cd5e91b8-4378-437f-be31-9a86a5fc2ef7" (UID: "cd5e91b8-4378-437f-be31-9a86a5fc2ef7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.372344 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "cd5e91b8-4378-437f-be31-9a86a5fc2ef7" (UID: "cd5e91b8-4378-437f-be31-9a86a5fc2ef7"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.373157 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cd5e91b8-4378-437f-be31-9a86a5fc2ef7" (UID: "cd5e91b8-4378-437f-be31-9a86a5fc2ef7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.374995 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "cd5e91b8-4378-437f-be31-9a86a5fc2ef7" (UID: "cd5e91b8-4378-437f-be31-9a86a5fc2ef7"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.432011 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnmsv\" (UniqueName: \"kubernetes.io/projected/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-kube-api-access-vnmsv\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.432051 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.432063 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.432074 4895 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.432086 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.432174 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.432186 4895 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/cd5e91b8-4378-437f-be31-9a86a5fc2ef7-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.788545 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" event={"ID":"cd5e91b8-4378-437f-be31-9a86a5fc2ef7","Type":"ContainerDied","Data":"0bd032b02bd0090dedc6afb733b0185773dd3f0fede3ec17dca7a897a1ae6897"} Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.788597 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bd032b02bd0090dedc6afb733b0185773dd3f0fede3ec17dca7a897a1ae6897" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.788622 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.895862 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5"] Jan 29 17:00:43 crc kubenswrapper[4895]: E0129 17:00:43.897700 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec083dbe-3c00-420b-b71a-d56c57270ab6" containerName="collect-profiles" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.897840 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec083dbe-3c00-420b-b71a-d56c57270ab6" containerName="collect-profiles" Jan 29 17:00:43 crc kubenswrapper[4895]: E0129 17:00:43.897980 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5e91b8-4378-437f-be31-9a86a5fc2ef7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.898104 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5e91b8-4378-437f-be31-9a86a5fc2ef7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 17:00:43 crc kubenswrapper[4895]: 
I0129 17:00:43.898544 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec083dbe-3c00-420b-b71a-d56c57270ab6" containerName="collect-profiles" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.898646 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5e91b8-4378-437f-be31-9a86a5fc2ef7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.899519 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.902581 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.902734 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.902784 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.902744 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.903039 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.903090 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 17:00:43 crc kubenswrapper[4895]: I0129 17:00:43.908271 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5"] Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.043612 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.044102 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.044177 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.044418 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv9hw\" (UniqueName: \"kubernetes.io/projected/4729dc58-3e8c-421b-82f8-45a513c3559d-kube-api-access-dv9hw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.044533 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-inventory\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.044622 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.147145 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.147308 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.147337 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.147393 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.147490 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv9hw\" (UniqueName: \"kubernetes.io/projected/4729dc58-3e8c-421b-82f8-45a513c3559d-kube-api-access-dv9hw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.147585 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.154907 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.155639 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: 
\"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.155756 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.156267 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.158097 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.175307 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv9hw\" (UniqueName: \"kubernetes.io/projected/4729dc58-3e8c-421b-82f8-45a513c3559d-kube-api-access-dv9hw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.220963 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.779235 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5"] Jan 29 17:00:44 crc kubenswrapper[4895]: W0129 17:00:44.790188 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4729dc58_3e8c_421b_82f8_45a513c3559d.slice/crio-fc08426c44d1471edb2e569ba6d9241dc1a7dff4a2bac8c7cc32a3d845780c95 WatchSource:0}: Error finding container fc08426c44d1471edb2e569ba6d9241dc1a7dff4a2bac8c7cc32a3d845780c95: Status 404 returned error can't find the container with id fc08426c44d1471edb2e569ba6d9241dc1a7dff4a2bac8c7cc32a3d845780c95 Jan 29 17:00:44 crc kubenswrapper[4895]: I0129 17:00:44.801458 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" event={"ID":"4729dc58-3e8c-421b-82f8-45a513c3559d","Type":"ContainerStarted","Data":"fc08426c44d1471edb2e569ba6d9241dc1a7dff4a2bac8c7cc32a3d845780c95"} Jan 29 17:00:45 crc kubenswrapper[4895]: I0129 17:00:45.813747 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" event={"ID":"4729dc58-3e8c-421b-82f8-45a513c3559d","Type":"ContainerStarted","Data":"2202688ce34a9eaf657cfb6510c625c21d9a03d0d060afaa8d8bad9eaddaaa33"} Jan 29 17:00:45 crc kubenswrapper[4895]: I0129 17:00:45.841710 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" podStartSLOduration=2.412754346 podStartE2EDuration="2.84168081s" podCreationTimestamp="2026-01-29 17:00:43 +0000 UTC" firstStartedPulling="2026-01-29 17:00:44.792644926 +0000 UTC m=+2928.595622190" lastFinishedPulling="2026-01-29 17:00:45.22157137 +0000 UTC m=+2929.024548654" 
observedRunningTime="2026-01-29 17:00:45.834274049 +0000 UTC m=+2929.637251323" watchObservedRunningTime="2026-01-29 17:00:45.84168081 +0000 UTC m=+2929.644658074" Jan 29 17:00:46 crc kubenswrapper[4895]: E0129 17:00:46.039614 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 17:00:47 crc kubenswrapper[4895]: E0129 17:00:47.044756 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:00:51 crc kubenswrapper[4895]: E0129 17:00:51.040025 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:00:53 crc kubenswrapper[4895]: E0129 17:00:53.039351 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:00:59 crc kubenswrapper[4895]: E0129 17:00:59.039084 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 17:00:59 crc kubenswrapper[4895]: E0129 17:00:59.039115 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.139694 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29495101-hnvf7"] Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.142260 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.151182 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495101-hnvf7"] Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.271154 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-combined-ca-bundle\") pod \"keystone-cron-29495101-hnvf7\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.271279 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-fernet-keys\") pod \"keystone-cron-29495101-hnvf7\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.271789 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-config-data\") pod \"keystone-cron-29495101-hnvf7\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.271930 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc785\" (UniqueName: \"kubernetes.io/projected/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-kube-api-access-dc785\") pod \"keystone-cron-29495101-hnvf7\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.374160 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-config-data\") pod \"keystone-cron-29495101-hnvf7\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.374224 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc785\" (UniqueName: \"kubernetes.io/projected/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-kube-api-access-dc785\") pod \"keystone-cron-29495101-hnvf7\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.374348 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-combined-ca-bundle\") pod \"keystone-cron-29495101-hnvf7\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.374414 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-fernet-keys\") pod \"keystone-cron-29495101-hnvf7\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.383310 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-combined-ca-bundle\") pod \"keystone-cron-29495101-hnvf7\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.383383 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-fernet-keys\") pod \"keystone-cron-29495101-hnvf7\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.384916 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-config-data\") pod \"keystone-cron-29495101-hnvf7\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.394372 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc785\" (UniqueName: \"kubernetes.io/projected/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-kube-api-access-dc785\") pod \"keystone-cron-29495101-hnvf7\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.475065 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.924731 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495101-hnvf7"] Jan 29 17:01:00 crc kubenswrapper[4895]: I0129 17:01:00.964463 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495101-hnvf7" event={"ID":"ae33dd87-375b-4069-ae8c-5135ec7f8fe9","Type":"ContainerStarted","Data":"f52dd81e650aef12453afdaa75d425b1795427212434960e842e525169773ae0"} Jan 29 17:01:01 crc kubenswrapper[4895]: I0129 17:01:01.975308 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495101-hnvf7" event={"ID":"ae33dd87-375b-4069-ae8c-5135ec7f8fe9","Type":"ContainerStarted","Data":"0cd3b0c2befc6a2adb6bb9d6d447eeda50328b90f667b957cd81aeae287a8be2"} Jan 29 17:01:02 crc kubenswrapper[4895]: I0129 17:01:02.950940 4895 scope.go:117] "RemoveContainer" containerID="2f0c9de26cd5d9d5b3cc67bebf2e60370b9376d3d4307b54c46ff8e42fd415cd" Jan 29 17:01:03 crc kubenswrapper[4895]: I0129 17:01:03.997652 4895 generic.go:334] "Generic (PLEG): container finished" podID="ae33dd87-375b-4069-ae8c-5135ec7f8fe9" containerID="0cd3b0c2befc6a2adb6bb9d6d447eeda50328b90f667b957cd81aeae287a8be2" exitCode=0 Jan 29 17:01:03 crc kubenswrapper[4895]: I0129 17:01:03.997716 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495101-hnvf7" event={"ID":"ae33dd87-375b-4069-ae8c-5135ec7f8fe9","Type":"ContainerDied","Data":"0cd3b0c2befc6a2adb6bb9d6d447eeda50328b90f667b957cd81aeae287a8be2"} Jan 29 17:01:04 crc kubenswrapper[4895]: E0129 17:01:04.040427 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" 
podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:01:05 crc kubenswrapper[4895]: E0129 17:01:05.040293 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.365700 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.491646 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-config-data\") pod \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.491966 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc785\" (UniqueName: \"kubernetes.io/projected/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-kube-api-access-dc785\") pod \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.492084 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-combined-ca-bundle\") pod \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\" (UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.492166 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-fernet-keys\") pod \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\" 
(UID: \"ae33dd87-375b-4069-ae8c-5135ec7f8fe9\") " Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.500668 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-kube-api-access-dc785" (OuterVolumeSpecName: "kube-api-access-dc785") pod "ae33dd87-375b-4069-ae8c-5135ec7f8fe9" (UID: "ae33dd87-375b-4069-ae8c-5135ec7f8fe9"). InnerVolumeSpecName "kube-api-access-dc785". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.503146 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ae33dd87-375b-4069-ae8c-5135ec7f8fe9" (UID: "ae33dd87-375b-4069-ae8c-5135ec7f8fe9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.531233 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae33dd87-375b-4069-ae8c-5135ec7f8fe9" (UID: "ae33dd87-375b-4069-ae8c-5135ec7f8fe9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.551933 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-config-data" (OuterVolumeSpecName: "config-data") pod "ae33dd87-375b-4069-ae8c-5135ec7f8fe9" (UID: "ae33dd87-375b-4069-ae8c-5135ec7f8fe9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.594298 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.594333 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc785\" (UniqueName: \"kubernetes.io/projected/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-kube-api-access-dc785\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.594345 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:05 crc kubenswrapper[4895]: I0129 17:01:05.594355 4895 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ae33dd87-375b-4069-ae8c-5135ec7f8fe9-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:06 crc kubenswrapper[4895]: I0129 17:01:06.017921 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495101-hnvf7" event={"ID":"ae33dd87-375b-4069-ae8c-5135ec7f8fe9","Type":"ContainerDied","Data":"f52dd81e650aef12453afdaa75d425b1795427212434960e842e525169773ae0"} Jan 29 17:01:06 crc kubenswrapper[4895]: I0129 17:01:06.018428 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495101-hnvf7" Jan 29 17:01:06 crc kubenswrapper[4895]: I0129 17:01:06.018555 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f52dd81e650aef12453afdaa75d425b1795427212434960e842e525169773ae0" Jan 29 17:01:12 crc kubenswrapper[4895]: E0129 17:01:12.039251 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 17:01:13 crc kubenswrapper[4895]: E0129 17:01:13.039988 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:01:16 crc kubenswrapper[4895]: E0129 17:01:16.038476 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:01:18 crc kubenswrapper[4895]: E0129 17:01:18.039824 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:01:23 crc kubenswrapper[4895]: E0129 17:01:23.040361 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" Jan 29 17:01:28 crc kubenswrapper[4895]: E0129 17:01:28.169932 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:01:28 crc kubenswrapper[4895]: E0129 17:01:28.170847 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8r64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGr
oup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-c9dxd_openshift-marketplace(29e9ec80-fcd0-4eca-8c96-01a531355911): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:01:28 crc kubenswrapper[4895]: E0129 17:01:28.172124 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:01:29 crc kubenswrapper[4895]: E0129 17:01:29.038220 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:01:32 crc kubenswrapper[4895]: E0129 17:01:32.038583 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:01:39 crc kubenswrapper[4895]: I0129 17:01:39.344854 4895 generic.go:334] "Generic (PLEG): container finished" 
podID="5b85df56-a33e-4596-bddf-1a7da0dece65" containerID="dd43badb8c082467f8f307c143efd347a0aedaac5a821607081171afc82023c3" exitCode=0 Jan 29 17:01:39 crc kubenswrapper[4895]: I0129 17:01:39.344936 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrx8j" event={"ID":"5b85df56-a33e-4596-bddf-1a7da0dece65","Type":"ContainerDied","Data":"dd43badb8c082467f8f307c143efd347a0aedaac5a821607081171afc82023c3"} Jan 29 17:01:40 crc kubenswrapper[4895]: I0129 17:01:40.356054 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrx8j" event={"ID":"5b85df56-a33e-4596-bddf-1a7da0dece65","Type":"ContainerStarted","Data":"e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54"} Jan 29 17:01:40 crc kubenswrapper[4895]: I0129 17:01:40.382064 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lrx8j" podStartSLOduration=1.990098747 podStartE2EDuration="3m13.382039604s" podCreationTimestamp="2026-01-29 16:58:27 +0000 UTC" firstStartedPulling="2026-01-29 16:58:28.360745806 +0000 UTC m=+2792.163723080" lastFinishedPulling="2026-01-29 17:01:39.752686673 +0000 UTC m=+2983.555663937" observedRunningTime="2026-01-29 17:01:40.374197081 +0000 UTC m=+2984.177174355" watchObservedRunningTime="2026-01-29 17:01:40.382039604 +0000 UTC m=+2984.185016868" Jan 29 17:01:41 crc kubenswrapper[4895]: E0129 17:01:41.039741 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:01:41 crc kubenswrapper[4895]: E0129 17:01:41.039834 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:01:43 crc kubenswrapper[4895]: E0129 17:01:43.039383 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:01:47 crc kubenswrapper[4895]: I0129 17:01:47.442357 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 17:01:47 crc kubenswrapper[4895]: I0129 17:01:47.443118 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 17:01:47 crc kubenswrapper[4895]: I0129 17:01:47.491951 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 17:01:48 crc kubenswrapper[4895]: I0129 17:01:48.496551 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 17:01:48 crc kubenswrapper[4895]: I0129 17:01:48.545385 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrx8j"] Jan 29 17:01:50 crc kubenswrapper[4895]: I0129 17:01:50.448012 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lrx8j" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" containerName="registry-server" containerID="cri-o://e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54" gracePeriod=2 Jan 29 17:01:50 crc kubenswrapper[4895]: I0129 17:01:50.896934 4895 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 17:01:50 crc kubenswrapper[4895]: I0129 17:01:50.972555 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b85df56-a33e-4596-bddf-1a7da0dece65-utilities\") pod \"5b85df56-a33e-4596-bddf-1a7da0dece65\" (UID: \"5b85df56-a33e-4596-bddf-1a7da0dece65\") " Jan 29 17:01:50 crc kubenswrapper[4895]: I0129 17:01:50.972716 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szcmv\" (UniqueName: \"kubernetes.io/projected/5b85df56-a33e-4596-bddf-1a7da0dece65-kube-api-access-szcmv\") pod \"5b85df56-a33e-4596-bddf-1a7da0dece65\" (UID: \"5b85df56-a33e-4596-bddf-1a7da0dece65\") " Jan 29 17:01:50 crc kubenswrapper[4895]: I0129 17:01:50.972759 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b85df56-a33e-4596-bddf-1a7da0dece65-catalog-content\") pod \"5b85df56-a33e-4596-bddf-1a7da0dece65\" (UID: \"5b85df56-a33e-4596-bddf-1a7da0dece65\") " Jan 29 17:01:50 crc kubenswrapper[4895]: I0129 17:01:50.973704 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b85df56-a33e-4596-bddf-1a7da0dece65-utilities" (OuterVolumeSpecName: "utilities") pod "5b85df56-a33e-4596-bddf-1a7da0dece65" (UID: "5b85df56-a33e-4596-bddf-1a7da0dece65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:01:50 crc kubenswrapper[4895]: I0129 17:01:50.981672 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b85df56-a33e-4596-bddf-1a7da0dece65-kube-api-access-szcmv" (OuterVolumeSpecName: "kube-api-access-szcmv") pod "5b85df56-a33e-4596-bddf-1a7da0dece65" (UID: "5b85df56-a33e-4596-bddf-1a7da0dece65"). 
InnerVolumeSpecName "kube-api-access-szcmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:01:50 crc kubenswrapper[4895]: I0129 17:01:50.997728 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b85df56-a33e-4596-bddf-1a7da0dece65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b85df56-a33e-4596-bddf-1a7da0dece65" (UID: "5b85df56-a33e-4596-bddf-1a7da0dece65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.074959 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szcmv\" (UniqueName: \"kubernetes.io/projected/5b85df56-a33e-4596-bddf-1a7da0dece65-kube-api-access-szcmv\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.075006 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b85df56-a33e-4596-bddf-1a7da0dece65-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.075018 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b85df56-a33e-4596-bddf-1a7da0dece65-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.458201 4895 generic.go:334] "Generic (PLEG): container finished" podID="5b85df56-a33e-4596-bddf-1a7da0dece65" containerID="e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54" exitCode=0 Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.458416 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lrx8j" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.458405 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrx8j" event={"ID":"5b85df56-a33e-4596-bddf-1a7da0dece65","Type":"ContainerDied","Data":"e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54"} Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.459802 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lrx8j" event={"ID":"5b85df56-a33e-4596-bddf-1a7da0dece65","Type":"ContainerDied","Data":"e780fc7dfd8d793a84ae27b296c296f2ea98c3247425e65ae2096251056c67a5"} Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.459833 4895 scope.go:117] "RemoveContainer" containerID="e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.482177 4895 scope.go:117] "RemoveContainer" containerID="dd43badb8c082467f8f307c143efd347a0aedaac5a821607081171afc82023c3" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.500118 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrx8j"] Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.509843 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lrx8j"] Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.529900 4895 scope.go:117] "RemoveContainer" containerID="d3b9d45e362c790a67275d038f63381c6f86d2880eda0d65e3cfe6bcc8fc5966" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.565063 4895 scope.go:117] "RemoveContainer" containerID="e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54" Jan 29 17:01:51 crc kubenswrapper[4895]: E0129 17:01:51.565776 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54\": container with ID starting with e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54 not found: ID does not exist" containerID="e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.565826 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54"} err="failed to get container status \"e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54\": rpc error: code = NotFound desc = could not find container \"e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54\": container with ID starting with e8a5c991332c3e927a49fb540aab708d526f2dbcb6e527e15b48b35a44bb5c54 not found: ID does not exist" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.565855 4895 scope.go:117] "RemoveContainer" containerID="dd43badb8c082467f8f307c143efd347a0aedaac5a821607081171afc82023c3" Jan 29 17:01:51 crc kubenswrapper[4895]: E0129 17:01:51.566507 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd43badb8c082467f8f307c143efd347a0aedaac5a821607081171afc82023c3\": container with ID starting with dd43badb8c082467f8f307c143efd347a0aedaac5a821607081171afc82023c3 not found: ID does not exist" containerID="dd43badb8c082467f8f307c143efd347a0aedaac5a821607081171afc82023c3" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.566567 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd43badb8c082467f8f307c143efd347a0aedaac5a821607081171afc82023c3"} err="failed to get container status \"dd43badb8c082467f8f307c143efd347a0aedaac5a821607081171afc82023c3\": rpc error: code = NotFound desc = could not find container \"dd43badb8c082467f8f307c143efd347a0aedaac5a821607081171afc82023c3\": container with ID 
starting with dd43badb8c082467f8f307c143efd347a0aedaac5a821607081171afc82023c3 not found: ID does not exist" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.566603 4895 scope.go:117] "RemoveContainer" containerID="d3b9d45e362c790a67275d038f63381c6f86d2880eda0d65e3cfe6bcc8fc5966" Jan 29 17:01:51 crc kubenswrapper[4895]: E0129 17:01:51.567372 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3b9d45e362c790a67275d038f63381c6f86d2880eda0d65e3cfe6bcc8fc5966\": container with ID starting with d3b9d45e362c790a67275d038f63381c6f86d2880eda0d65e3cfe6bcc8fc5966 not found: ID does not exist" containerID="d3b9d45e362c790a67275d038f63381c6f86d2880eda0d65e3cfe6bcc8fc5966" Jan 29 17:01:51 crc kubenswrapper[4895]: I0129 17:01:51.567457 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3b9d45e362c790a67275d038f63381c6f86d2880eda0d65e3cfe6bcc8fc5966"} err="failed to get container status \"d3b9d45e362c790a67275d038f63381c6f86d2880eda0d65e3cfe6bcc8fc5966\": rpc error: code = NotFound desc = could not find container \"d3b9d45e362c790a67275d038f63381c6f86d2880eda0d65e3cfe6bcc8fc5966\": container with ID starting with d3b9d45e362c790a67275d038f63381c6f86d2880eda0d65e3cfe6bcc8fc5966 not found: ID does not exist" Jan 29 17:01:53 crc kubenswrapper[4895]: I0129 17:01:53.047173 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" path="/var/lib/kubelet/pods/5b85df56-a33e-4596-bddf-1a7da0dece65/volumes" Jan 29 17:01:54 crc kubenswrapper[4895]: E0129 17:01:54.039256 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 
17:01:54 crc kubenswrapper[4895]: E0129 17:01:54.166313 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:01:54 crc kubenswrapper[4895]: E0129 17:01:54.166609 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj5zt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]
ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2zbgz_openshift-marketplace(6337deb0-d51e-4fa1-8aab-24cebc2988c2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:01:54 crc kubenswrapper[4895]: E0129 17:01:54.168083 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:01:56 crc kubenswrapper[4895]: E0129 17:01:56.040537 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:02:06 crc kubenswrapper[4895]: E0129 17:02:06.173282 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:02:06 crc kubenswrapper[4895]: E0129 17:02:06.174048 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7xwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8s2lq_openshift-marketplace(14f81c9c-0e13-446b-a525-370c39259440): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:02:06 crc kubenswrapper[4895]: E0129 17:02:06.175269 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:02:07 crc kubenswrapper[4895]: E0129 17:02:07.047027 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:02:10 crc kubenswrapper[4895]: E0129 17:02:10.038706 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:02:17 crc kubenswrapper[4895]: E0129 17:02:17.044926 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:02:21 crc kubenswrapper[4895]: E0129 17:02:21.039953 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:02:25 crc kubenswrapper[4895]: E0129 17:02:25.039813 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:02:32 crc kubenswrapper[4895]: E0129 17:02:32.038515 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:02:36 crc kubenswrapper[4895]: E0129 17:02:36.041059 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:02:36 crc kubenswrapper[4895]: E0129 17:02:36.041222 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:02:44 crc kubenswrapper[4895]: E0129 17:02:44.039678 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:02:48 crc kubenswrapper[4895]: E0129 17:02:48.039808 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:02:48 crc kubenswrapper[4895]: E0129 17:02:48.040023 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:02:56 crc kubenswrapper[4895]: E0129 17:02:56.039487 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:02:57 crc kubenswrapper[4895]: I0129 17:02:57.823107 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:02:57 crc kubenswrapper[4895]: I0129 17:02:57.823521 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:03:02 crc kubenswrapper[4895]: E0129 17:03:02.039624 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" 
podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:03:02 crc kubenswrapper[4895]: E0129 17:03:02.039838 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:03:07 crc kubenswrapper[4895]: E0129 17:03:07.046218 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:03:15 crc kubenswrapper[4895]: E0129 17:03:15.040569 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:03:15 crc kubenswrapper[4895]: E0129 17:03:15.041816 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:03:19 crc kubenswrapper[4895]: E0129 17:03:19.040543 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" 
podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:03:27 crc kubenswrapper[4895]: E0129 17:03:27.050152 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:03:27 crc kubenswrapper[4895]: E0129 17:03:27.050422 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:03:27 crc kubenswrapper[4895]: I0129 17:03:27.823281 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:03:27 crc kubenswrapper[4895]: I0129 17:03:27.823726 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:03:33 crc kubenswrapper[4895]: E0129 17:03:33.040471 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:03:38 crc 
kubenswrapper[4895]: E0129 17:03:38.038937 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:03:40 crc kubenswrapper[4895]: E0129 17:03:40.040847 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:03:46 crc kubenswrapper[4895]: E0129 17:03:46.040627 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:03:51 crc kubenswrapper[4895]: E0129 17:03:51.040502 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:03:53 crc kubenswrapper[4895]: E0129 17:03:53.039432 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:03:57 crc kubenswrapper[4895]: I0129 17:03:57.823280 
4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:03:57 crc kubenswrapper[4895]: I0129 17:03:57.824483 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:03:57 crc kubenswrapper[4895]: I0129 17:03:57.824571 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 17:03:57 crc kubenswrapper[4895]: I0129 17:03:57.826013 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:03:57 crc kubenswrapper[4895]: I0129 17:03:57.826152 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" gracePeriod=600 Jan 29 17:03:57 crc kubenswrapper[4895]: E0129 17:03:57.947205 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:03:58 crc kubenswrapper[4895]: I0129 17:03:58.598702 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" exitCode=0 Jan 29 17:03:58 crc kubenswrapper[4895]: I0129 17:03:58.598777 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba"} Jan 29 17:03:58 crc kubenswrapper[4895]: I0129 17:03:58.598846 4895 scope.go:117] "RemoveContainer" containerID="8076c5b4874e3a644860820766402316462025e7cb5b586483b35291947d5378" Jan 29 17:03:58 crc kubenswrapper[4895]: I0129 17:03:58.600147 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:03:58 crc kubenswrapper[4895]: E0129 17:03:58.600462 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:04:01 crc kubenswrapper[4895]: E0129 17:04:01.039635 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:04:04 crc kubenswrapper[4895]: E0129 17:04:04.040048 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:04:08 crc kubenswrapper[4895]: E0129 17:04:08.046211 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:04:09 crc kubenswrapper[4895]: I0129 17:04:09.038598 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:04:09 crc kubenswrapper[4895]: E0129 17:04:09.039343 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:04:13 crc kubenswrapper[4895]: E0129 17:04:13.041358 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:04:18 crc kubenswrapper[4895]: E0129 
17:04:18.039909 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:04:22 crc kubenswrapper[4895]: I0129 17:04:22.037329 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:04:22 crc kubenswrapper[4895]: E0129 17:04:22.039242 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:04:23 crc kubenswrapper[4895]: I0129 17:04:23.043832 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:04:23 crc kubenswrapper[4895]: E0129 17:04:23.180836 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:04:23 crc kubenswrapper[4895]: E0129 17:04:23.181128 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8r64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-c9dxd_openshift-marketplace(29e9ec80-fcd0-4eca-8c96-01a531355911): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:04:23 crc kubenswrapper[4895]: E0129 17:04:23.182387 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:04:25 crc kubenswrapper[4895]: E0129 17:04:25.040714 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:04:33 crc kubenswrapper[4895]: E0129 17:04:33.040032 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:04:35 crc kubenswrapper[4895]: I0129 17:04:35.037975 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:04:35 crc kubenswrapper[4895]: E0129 17:04:35.039152 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:04:35 crc kubenswrapper[4895]: E0129 17:04:35.039664 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:04:37 crc kubenswrapper[4895]: E0129 
17:04:37.045896 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:04:44 crc kubenswrapper[4895]: E0129 17:04:44.167113 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:04:44 crc kubenswrapper[4895]: E0129 17:04:44.168001 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj5zt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2zbgz_openshift-marketplace(6337deb0-d51e-4fa1-8aab-24cebc2988c2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:04:44 crc kubenswrapper[4895]: E0129 17:04:44.169210 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:04:47 crc kubenswrapper[4895]: I0129 17:04:47.031726 4895 generic.go:334] "Generic (PLEG): container finished" podID="4729dc58-3e8c-421b-82f8-45a513c3559d" containerID="2202688ce34a9eaf657cfb6510c625c21d9a03d0d060afaa8d8bad9eaddaaa33" exitCode=0 Jan 29 17:04:47 crc kubenswrapper[4895]: I0129 17:04:47.031809 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" event={"ID":"4729dc58-3e8c-421b-82f8-45a513c3559d","Type":"ContainerDied","Data":"2202688ce34a9eaf657cfb6510c625c21d9a03d0d060afaa8d8bad9eaddaaa33"} Jan 29 17:04:48 crc kubenswrapper[4895]: E0129 17:04:48.039557 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.426829 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.435957 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-libvirt-secret-0\") pod \"4729dc58-3e8c-421b-82f8-45a513c3559d\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.436048 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv9hw\" (UniqueName: \"kubernetes.io/projected/4729dc58-3e8c-421b-82f8-45a513c3559d-kube-api-access-dv9hw\") pod \"4729dc58-3e8c-421b-82f8-45a513c3559d\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.436226 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-libvirt-combined-ca-bundle\") pod \"4729dc58-3e8c-421b-82f8-45a513c3559d\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.436256 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-ssh-key-openstack-edpm-ipam\") pod \"4729dc58-3e8c-421b-82f8-45a513c3559d\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.436333 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-ceph\") pod \"4729dc58-3e8c-421b-82f8-45a513c3559d\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.436406 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-inventory\") pod \"4729dc58-3e8c-421b-82f8-45a513c3559d\" (UID: \"4729dc58-3e8c-421b-82f8-45a513c3559d\") " Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.443541 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-ceph" (OuterVolumeSpecName: "ceph") pod "4729dc58-3e8c-421b-82f8-45a513c3559d" (UID: "4729dc58-3e8c-421b-82f8-45a513c3559d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.443839 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4729dc58-3e8c-421b-82f8-45a513c3559d-kube-api-access-dv9hw" (OuterVolumeSpecName: "kube-api-access-dv9hw") pod "4729dc58-3e8c-421b-82f8-45a513c3559d" (UID: "4729dc58-3e8c-421b-82f8-45a513c3559d"). InnerVolumeSpecName "kube-api-access-dv9hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.449231 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4729dc58-3e8c-421b-82f8-45a513c3559d" (UID: "4729dc58-3e8c-421b-82f8-45a513c3559d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.477974 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-inventory" (OuterVolumeSpecName: "inventory") pod "4729dc58-3e8c-421b-82f8-45a513c3559d" (UID: "4729dc58-3e8c-421b-82f8-45a513c3559d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.478094 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4729dc58-3e8c-421b-82f8-45a513c3559d" (UID: "4729dc58-3e8c-421b-82f8-45a513c3559d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.512423 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "4729dc58-3e8c-421b-82f8-45a513c3559d" (UID: "4729dc58-3e8c-421b-82f8-45a513c3559d"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.538706 4895 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.538743 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.538756 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.538769 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.538786 4895 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4729dc58-3e8c-421b-82f8-45a513c3559d-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:48 crc kubenswrapper[4895]: I0129 17:04:48.538797 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv9hw\" (UniqueName: \"kubernetes.io/projected/4729dc58-3e8c-421b-82f8-45a513c3559d-kube-api-access-dv9hw\") on node \"crc\" DevicePath \"\"" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.053755 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" event={"ID":"4729dc58-3e8c-421b-82f8-45a513c3559d","Type":"ContainerDied","Data":"fc08426c44d1471edb2e569ba6d9241dc1a7dff4a2bac8c7cc32a3d845780c95"} Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.053801 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc08426c44d1471edb2e569ba6d9241dc1a7dff4a2bac8c7cc32a3d845780c95" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.053860 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.140407 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4"] Jan 29 17:04:49 crc kubenswrapper[4895]: E0129 17:04:49.140999 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" containerName="extract-content" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.141076 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" containerName="extract-content" Jan 29 17:04:49 crc kubenswrapper[4895]: E0129 17:04:49.141174 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" containerName="registry-server" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.141234 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" containerName="registry-server" Jan 29 17:04:49 crc kubenswrapper[4895]: E0129 17:04:49.141296 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4729dc58-3e8c-421b-82f8-45a513c3559d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.141351 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4729dc58-3e8c-421b-82f8-45a513c3559d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 29 17:04:49 crc kubenswrapper[4895]: E0129 17:04:49.141404 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae33dd87-375b-4069-ae8c-5135ec7f8fe9" containerName="keystone-cron" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.141456 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae33dd87-375b-4069-ae8c-5135ec7f8fe9" containerName="keystone-cron" Jan 29 17:04:49 crc kubenswrapper[4895]: E0129 17:04:49.141522 4895 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" containerName="extract-utilities" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.141575 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" containerName="extract-utilities" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.141791 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae33dd87-375b-4069-ae8c-5135ec7f8fe9" containerName="keystone-cron" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.141854 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4729dc58-3e8c-421b-82f8-45a513c3559d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.141944 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b85df56-a33e-4596-bddf-1a7da0dece65" containerName="registry-server" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.142600 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.148012 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.148092 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.148023 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-cm4v7" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.148091 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.148191 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.148097 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.148425 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.148454 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.148902 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.151919 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: 
\"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.151983 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.152013 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.152063 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.152134 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: 
\"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.152175 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr8rl\" (UniqueName: \"kubernetes.io/projected/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-kube-api-access-sr8rl\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.152222 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.152249 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.152310 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" 
Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.152338 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.152402 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: E0129 17:04:49.161957 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:04:49 crc kubenswrapper[4895]: E0129 17:04:49.162479 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7xwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8s2lq_openshift-marketplace(14f81c9c-0e13-446b-a525-370c39259440): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:04:49 crc kubenswrapper[4895]: E0129 17:04:49.163931 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.164639 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4"] Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.255028 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.255918 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.256138 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.256243 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-inventory\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.256267 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.256305 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.256392 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.256447 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.256522 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.256593 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr8rl\" (UniqueName: \"kubernetes.io/projected/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-kube-api-access-sr8rl\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.256646 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.256671 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.257973 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.261594 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.261632 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.261877 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.262246 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-cell1-compute-config-1\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.262325 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.262605 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.263250 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.263982 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.273680 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr8rl\" (UniqueName: \"kubernetes.io/projected/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-kube-api-access-sr8rl\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:49 crc kubenswrapper[4895]: I0129 17:04:49.464480 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:04:50 crc kubenswrapper[4895]: I0129 17:04:50.033400 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4"] Jan 29 17:04:50 crc kubenswrapper[4895]: W0129 17:04:50.036830 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a0a88bf_e09f_4ff3_bcdd_f9ac967335a7.slice/crio-db6674993eae984b0224ae2f303ed7d45e52caf833ea04ce3e17e8c98529d622 WatchSource:0}: Error finding container db6674993eae984b0224ae2f303ed7d45e52caf833ea04ce3e17e8c98529d622: Status 404 returned error can't find the container with id db6674993eae984b0224ae2f303ed7d45e52caf833ea04ce3e17e8c98529d622 Jan 29 17:04:50 crc kubenswrapper[4895]: I0129 17:04:50.037410 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:04:50 crc kubenswrapper[4895]: E0129 17:04:50.037683 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:04:50 crc kubenswrapper[4895]: I0129 17:04:50.064404 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" event={"ID":"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7","Type":"ContainerStarted","Data":"db6674993eae984b0224ae2f303ed7d45e52caf833ea04ce3e17e8c98529d622"} Jan 29 17:04:51 crc kubenswrapper[4895]: I0129 17:04:51.078162 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" event={"ID":"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7","Type":"ContainerStarted","Data":"2970b79c070b4495eaebefeec33cffa154f4f25b9477306cf1fe84028e844443"} Jan 29 17:04:51 crc kubenswrapper[4895]: I0129 17:04:51.116176 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" podStartSLOduration=1.7129434780000001 podStartE2EDuration="2.116152024s" podCreationTimestamp="2026-01-29 17:04:49 +0000 UTC" firstStartedPulling="2026-01-29 17:04:50.040127439 +0000 UTC m=+3173.843104723" lastFinishedPulling="2026-01-29 17:04:50.443335995 +0000 UTC m=+3174.246313269" observedRunningTime="2026-01-29 17:04:51.102207296 +0000 UTC m=+3174.905184600" watchObservedRunningTime="2026-01-29 17:04:51.116152024 +0000 UTC m=+3174.919129308" Jan 29 17:04:56 crc kubenswrapper[4895]: E0129 17:04:56.038525 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:05:00 crc 
kubenswrapper[4895]: E0129 17:05:00.039157 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:05:01 crc kubenswrapper[4895]: I0129 17:05:01.036760 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:05:01 crc kubenswrapper[4895]: E0129 17:05:01.037372 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:05:03 crc kubenswrapper[4895]: E0129 17:05:03.039656 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:05:08 crc kubenswrapper[4895]: E0129 17:05:08.038972 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:05:11 crc kubenswrapper[4895]: E0129 17:05:11.038940 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:05:16 crc kubenswrapper[4895]: I0129 17:05:16.037304 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:05:16 crc kubenswrapper[4895]: E0129 17:05:16.038342 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:05:17 crc kubenswrapper[4895]: E0129 17:05:17.044482 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:05:19 crc kubenswrapper[4895]: E0129 17:05:19.039495 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:05:24 crc kubenswrapper[4895]: E0129 17:05:24.039861 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:05:27 crc kubenswrapper[4895]: I0129 17:05:27.042049 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:05:27 crc kubenswrapper[4895]: E0129 17:05:27.043111 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:05:28 crc kubenswrapper[4895]: E0129 17:05:28.040259 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:05:34 crc kubenswrapper[4895]: E0129 17:05:34.039295 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:05:37 crc kubenswrapper[4895]: E0129 17:05:37.044450 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:05:39 crc kubenswrapper[4895]: E0129 
17:05:39.040267 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:05:41 crc kubenswrapper[4895]: I0129 17:05:41.038141 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:05:41 crc kubenswrapper[4895]: E0129 17:05:41.038373 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:05:48 crc kubenswrapper[4895]: E0129 17:05:48.039785 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:05:50 crc kubenswrapper[4895]: E0129 17:05:50.039015 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:05:50 crc kubenswrapper[4895]: E0129 17:05:50.039022 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:05:56 crc kubenswrapper[4895]: I0129 17:05:56.037492 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:05:56 crc kubenswrapper[4895]: E0129 17:05:56.038311 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:06:00 crc kubenswrapper[4895]: E0129 17:06:00.041461 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:06:03 crc kubenswrapper[4895]: E0129 17:06:03.039092 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:06:05 crc kubenswrapper[4895]: E0129 17:06:05.039784 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" 
podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:06:07 crc kubenswrapper[4895]: I0129 17:06:07.042486 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:06:07 crc kubenswrapper[4895]: E0129 17:06:07.043209 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:06:11 crc kubenswrapper[4895]: E0129 17:06:11.039051 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:06:14 crc kubenswrapper[4895]: E0129 17:06:14.039297 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:06:18 crc kubenswrapper[4895]: E0129 17:06:18.040163 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:06:22 crc kubenswrapper[4895]: I0129 17:06:22.038154 4895 scope.go:117] "RemoveContainer" 
containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:06:22 crc kubenswrapper[4895]: E0129 17:06:22.039128 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:06:25 crc kubenswrapper[4895]: E0129 17:06:25.039749 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:06:26 crc kubenswrapper[4895]: E0129 17:06:26.038141 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:06:33 crc kubenswrapper[4895]: E0129 17:06:33.039296 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:06:36 crc kubenswrapper[4895]: I0129 17:06:36.055643 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:06:36 crc kubenswrapper[4895]: E0129 17:06:36.056595 4895 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:06:37 crc kubenswrapper[4895]: E0129 17:06:37.052172 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:06:39 crc kubenswrapper[4895]: E0129 17:06:39.039200 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:06:47 crc kubenswrapper[4895]: E0129 17:06:47.046844 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:06:49 crc kubenswrapper[4895]: I0129 17:06:49.037153 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:06:49 crc kubenswrapper[4895]: E0129 17:06:49.037809 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:06:50 crc kubenswrapper[4895]: E0129 17:06:50.039636 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:06:53 crc kubenswrapper[4895]: E0129 17:06:53.038722 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:07:00 crc kubenswrapper[4895]: E0129 17:07:00.041123 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:07:01 crc kubenswrapper[4895]: I0129 17:07:01.036483 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:07:01 crc kubenswrapper[4895]: E0129 17:07:01.036835 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:07:01 crc kubenswrapper[4895]: E0129 17:07:01.037721 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:07:08 crc kubenswrapper[4895]: E0129 17:07:08.039225 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:07:13 crc kubenswrapper[4895]: E0129 17:07:13.039509 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:07:14 crc kubenswrapper[4895]: I0129 17:07:14.339386 4895 generic.go:334] "Generic (PLEG): container finished" podID="6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" containerID="2970b79c070b4495eaebefeec33cffa154f4f25b9477306cf1fe84028e844443" exitCode=0 Jan 29 17:07:14 crc kubenswrapper[4895]: I0129 17:07:14.339522 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" event={"ID":"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7","Type":"ContainerDied","Data":"2970b79c070b4495eaebefeec33cffa154f4f25b9477306cf1fe84028e844443"} Jan 29 
17:07:15 crc kubenswrapper[4895]: E0129 17:07:15.040314 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.786156 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.900824 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-migration-ssh-key-1\") pod \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.900970 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ceph-nova-0\") pod \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.901070 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ssh-key-openstack-edpm-ipam\") pod \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.901094 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-cell1-compute-config-1\") pod 
\"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.901132 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-cell1-compute-config-0\") pod \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.901160 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ceph\") pod \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.901203 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-migration-ssh-key-0\") pod \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.901242 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-custom-ceph-combined-ca-bundle\") pod \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.901262 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr8rl\" (UniqueName: \"kubernetes.io/projected/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-kube-api-access-sr8rl\") pod \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.901286 4895 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-inventory\") pod \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.901341 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-extra-config-0\") pod \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\" (UID: \"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7\") " Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.908927 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" (UID: "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.909532 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-kube-api-access-sr8rl" (OuterVolumeSpecName: "kube-api-access-sr8rl") pod "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" (UID: "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7"). InnerVolumeSpecName "kube-api-access-sr8rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.910045 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ceph" (OuterVolumeSpecName: "ceph") pod "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" (UID: "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.929288 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" (UID: "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.933218 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" (UID: "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.934220 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" (UID: "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.935121 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" (UID: "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.935495 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" (UID: "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.935674 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" (UID: "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.940831 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-inventory" (OuterVolumeSpecName: "inventory") pod "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" (UID: "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:15 crc kubenswrapper[4895]: I0129 17:07:15.941167 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" (UID: "6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.005058 4895 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.005404 4895 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.005494 4895 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.005584 4895 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.005662 4895 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.006764 4895 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.007116 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-ceph\") on node \"crc\" DevicePath \"\"" Jan 
29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.007343 4895 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.007434 4895 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.008965 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr8rl\" (UniqueName: \"kubernetes.io/projected/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-kube-api-access-sr8rl\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.009082 4895 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.037603 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:07:16 crc kubenswrapper[4895]: E0129 17:07:16.037988 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.357978 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" 
event={"ID":"6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7","Type":"ContainerDied","Data":"db6674993eae984b0224ae2f303ed7d45e52caf833ea04ce3e17e8c98529d622"} Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.358029 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db6674993eae984b0224ae2f303ed7d45e52caf833ea04ce3e17e8c98529d622" Jan 29 17:07:16 crc kubenswrapper[4895]: I0129 17:07:16.358358 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4" Jan 29 17:07:23 crc kubenswrapper[4895]: E0129 17:07:23.039403 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:07:26 crc kubenswrapper[4895]: E0129 17:07:26.040138 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:07:29 crc kubenswrapper[4895]: I0129 17:07:29.037338 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:07:29 crc kubenswrapper[4895]: E0129 17:07:29.038391 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" 
podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:07:30 crc kubenswrapper[4895]: E0129 17:07:30.038853 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.263499 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 29 17:07:31 crc kubenswrapper[4895]: E0129 17:07:31.264604 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.264635 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.264911 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.266207 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.270799 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.271065 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.274918 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.345612 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.347167 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.349581 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.361209 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381162 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e241f959-4c63-420a-9ab9-988ce0f2a46a-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381216 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc 
kubenswrapper[4895]: I0129 17:07:31.381240 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381264 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381287 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381341 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e241f959-4c63-420a-9ab9-988ce0f2a46a-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381377 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e241f959-4c63-420a-9ab9-988ce0f2a46a-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 
17:07:31.381403 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5hn9\" (UniqueName: \"kubernetes.io/projected/e241f959-4c63-420a-9ab9-988ce0f2a46a-kube-api-access-s5hn9\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381425 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-dev\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381454 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381486 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-sys\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381536 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381567 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381605 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-run\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381666 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e241f959-4c63-420a-9ab9-988ce0f2a46a-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.381695 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e241f959-4c63-420a-9ab9-988ce0f2a46a-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.483678 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-dev\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.483718 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.483743 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.483770 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e241f959-4c63-420a-9ab9-988ce0f2a46a-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.483792 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.483816 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.483973 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e241f959-4c63-420a-9ab9-988ce0f2a46a-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484085 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-lib-modules\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484118 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e241f959-4c63-420a-9ab9-988ce0f2a46a-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484146 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-sys\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484180 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484209 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " 
pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484239 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484271 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484265 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484306 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-run\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484331 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484334 4895 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484463 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484515 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484517 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e241f959-4c63-420a-9ab9-988ce0f2a46a-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484570 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-config-data-custom\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484638 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e241f959-4c63-420a-9ab9-988ce0f2a46a-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484673 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5hn9\" (UniqueName: \"kubernetes.io/projected/e241f959-4c63-420a-9ab9-988ce0f2a46a-kube-api-access-s5hn9\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484703 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-etc-nvme\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484731 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-dev\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484780 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484833 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-config-data\") pod \"cinder-backup-0\" (UID: 
\"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484888 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-sys\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.484984 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.485019 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-ceph\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.485056 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.485094 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.485147 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-scripts\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.485181 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-run\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.485214 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5426\" (UniqueName: \"kubernetes.io/projected/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-kube-api-access-l5426\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.485241 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.485485 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.485486 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.485506 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-run\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.485532 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-dev\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.485374 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e241f959-4c63-420a-9ab9-988ce0f2a46a-sys\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.491360 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e241f959-4c63-420a-9ab9-988ce0f2a46a-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.491847 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e241f959-4c63-420a-9ab9-988ce0f2a46a-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 
17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.491910 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e241f959-4c63-420a-9ab9-988ce0f2a46a-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.492380 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e241f959-4c63-420a-9ab9-988ce0f2a46a-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.492889 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e241f959-4c63-420a-9ab9-988ce0f2a46a-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.505897 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5hn9\" (UniqueName: \"kubernetes.io/projected/e241f959-4c63-420a-9ab9-988ce0f2a46a-kube-api-access-s5hn9\") pod \"cinder-volume-volume1-0\" (UID: \"e241f959-4c63-420a-9ab9-988ce0f2a46a\") " pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587093 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-run\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587159 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587179 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-config-data-custom\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587207 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-etc-nvme\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587237 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-config-data\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587277 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-ceph\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587269 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-run\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587297 
4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587348 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587380 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-scripts\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587441 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5426\" (UniqueName: \"kubernetes.io/projected/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-kube-api-access-l5426\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587529 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587570 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-dev\") pod \"cinder-backup-0\" (UID: 
\"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587590 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587607 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587632 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587650 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587719 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-lib-modules\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587737 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-sys\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.587905 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-sys\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.588022 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.588134 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.588227 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.588837 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-etc-nvme\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 
29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.588992 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-dev\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.589367 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-lib-modules\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.590817 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-config-data-custom\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.591511 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.591982 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-ceph\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.592804 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-config-data\") pod \"cinder-backup-0\" 
(UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.593790 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-scripts\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.594435 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.616507 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5426\" (UniqueName: \"kubernetes.io/projected/fd7b84a0-a0ab-40b1-802f-3ec3279e1712-kube-api-access-l5426\") pod \"cinder-backup-0\" (UID: \"fd7b84a0-a0ab-40b1-802f-3ec3279e1712\") " pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.667313 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.752019 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-7pb72"] Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.753596 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-7pb72" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.769363 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-7pb72"] Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.895026 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88eecffd-f694-4bce-b938-966e62335540-operator-scripts\") pod \"manila-db-create-7pb72\" (UID: \"88eecffd-f694-4bce-b938-966e62335540\") " pod="openstack/manila-db-create-7pb72" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.906342 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26chm\" (UniqueName: \"kubernetes.io/projected/88eecffd-f694-4bce-b938-966e62335540-kube-api-access-26chm\") pod \"manila-db-create-7pb72\" (UID: \"88eecffd-f694-4bce-b938-966e62335540\") " pod="openstack/manila-db-create-7pb72" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.906615 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-77da-account-create-update-jmwks"] Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.911106 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-77da-account-create-update-jmwks" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.921174 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.946713 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-77da-account-create-update-jmwks"] Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.984266 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-84675dc6ff-mgf7b"] Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.986552 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.993435 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.993642 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-h4xbn" Jan 29 17:07:31 crc kubenswrapper[4895]: I0129 17:07:31.993747 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.002784 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.009551 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b531318-bcfa-4162-b306-3a293fa21814-horizon-secret-key\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.009618 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee8529db-c2f6-4976-a6ff-19c73dca11ab-operator-scripts\") pod \"manila-77da-account-create-update-jmwks\" (UID: \"ee8529db-c2f6-4976-a6ff-19c73dca11ab\") " pod="openstack/manila-77da-account-create-update-jmwks" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.009688 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88eecffd-f694-4bce-b938-966e62335540-operator-scripts\") pod \"manila-db-create-7pb72\" (UID: \"88eecffd-f694-4bce-b938-966e62335540\") " pod="openstack/manila-db-create-7pb72" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.009793 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chd45\" (UniqueName: \"kubernetes.io/projected/6b531318-bcfa-4162-b306-3a293fa21814-kube-api-access-chd45\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.009818 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26chm\" (UniqueName: \"kubernetes.io/projected/88eecffd-f694-4bce-b938-966e62335540-kube-api-access-26chm\") pod \"manila-db-create-7pb72\" (UID: \"88eecffd-f694-4bce-b938-966e62335540\") " pod="openstack/manila-db-create-7pb72" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.009848 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b531318-bcfa-4162-b306-3a293fa21814-logs\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.019459 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/88eecffd-f694-4bce-b938-966e62335540-operator-scripts\") pod \"manila-db-create-7pb72\" (UID: \"88eecffd-f694-4bce-b938-966e62335540\") " pod="openstack/manila-db-create-7pb72" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.021473 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b531318-bcfa-4162-b306-3a293fa21814-scripts\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.021548 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b531318-bcfa-4162-b306-3a293fa21814-config-data\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.021642 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75xg2\" (UniqueName: \"kubernetes.io/projected/ee8529db-c2f6-4976-a6ff-19c73dca11ab-kube-api-access-75xg2\") pod \"manila-77da-account-create-update-jmwks\" (UID: \"ee8529db-c2f6-4976-a6ff-19c73dca11ab\") " pod="openstack/manila-77da-account-create-update-jmwks" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.037662 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84675dc6ff-mgf7b"] Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.061198 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26chm\" (UniqueName: \"kubernetes.io/projected/88eecffd-f694-4bce-b938-966e62335540-kube-api-access-26chm\") pod \"manila-db-create-7pb72\" (UID: \"88eecffd-f694-4bce-b938-966e62335540\") " pod="openstack/manila-db-create-7pb72" Jan 29 17:07:32 
crc kubenswrapper[4895]: I0129 17:07:32.081422 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9d595c8b5-wgg2f"] Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.089551 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.102856 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-7pb72" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.123297 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cd366a77-4e31-4194-b0fb-2889144e8441-horizon-secret-key\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.123370 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gklgk\" (UniqueName: \"kubernetes.io/projected/cd366a77-4e31-4194-b0fb-2889144e8441-kube-api-access-gklgk\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.123405 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd366a77-4e31-4194-b0fb-2889144e8441-config-data\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.123470 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chd45\" (UniqueName: \"kubernetes.io/projected/6b531318-bcfa-4162-b306-3a293fa21814-kube-api-access-chd45\") pod 
\"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.123500 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b531318-bcfa-4162-b306-3a293fa21814-logs\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.123549 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b531318-bcfa-4162-b306-3a293fa21814-scripts\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.123579 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b531318-bcfa-4162-b306-3a293fa21814-config-data\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.123622 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd366a77-4e31-4194-b0fb-2889144e8441-logs\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.123644 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75xg2\" (UniqueName: \"kubernetes.io/projected/ee8529db-c2f6-4976-a6ff-19c73dca11ab-kube-api-access-75xg2\") pod \"manila-77da-account-create-update-jmwks\" (UID: \"ee8529db-c2f6-4976-a6ff-19c73dca11ab\") " 
pod="openstack/manila-77da-account-create-update-jmwks" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.123702 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b531318-bcfa-4162-b306-3a293fa21814-horizon-secret-key\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.123727 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd366a77-4e31-4194-b0fb-2889144e8441-scripts\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.123763 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee8529db-c2f6-4976-a6ff-19c73dca11ab-operator-scripts\") pod \"manila-77da-account-create-update-jmwks\" (UID: \"ee8529db-c2f6-4976-a6ff-19c73dca11ab\") " pod="openstack/manila-77da-account-create-update-jmwks" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.125149 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b531318-bcfa-4162-b306-3a293fa21814-scripts\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.126844 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee8529db-c2f6-4976-a6ff-19c73dca11ab-operator-scripts\") pod \"manila-77da-account-create-update-jmwks\" (UID: \"ee8529db-c2f6-4976-a6ff-19c73dca11ab\") " 
pod="openstack/manila-77da-account-create-update-jmwks" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.130330 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b531318-bcfa-4162-b306-3a293fa21814-config-data\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.133019 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b531318-bcfa-4162-b306-3a293fa21814-logs\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.142224 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9d595c8b5-wgg2f"] Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.146287 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b531318-bcfa-4162-b306-3a293fa21814-horizon-secret-key\") pod \"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.165734 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75xg2\" (UniqueName: \"kubernetes.io/projected/ee8529db-c2f6-4976-a6ff-19c73dca11ab-kube-api-access-75xg2\") pod \"manila-77da-account-create-update-jmwks\" (UID: \"ee8529db-c2f6-4976-a6ff-19c73dca11ab\") " pod="openstack/manila-77da-account-create-update-jmwks" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.165779 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chd45\" (UniqueName: \"kubernetes.io/projected/6b531318-bcfa-4162-b306-3a293fa21814-kube-api-access-chd45\") pod 
\"horizon-84675dc6ff-mgf7b\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.166032 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.172214 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.194282 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.194659 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.195766 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qb886" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.196278 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.204700 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.207047 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.214675 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.217759 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.221782 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226124 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226185 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqn7m\" (UniqueName: \"kubernetes.io/projected/82581b12-b974-4da4-9a9b-8842de4faddb-kube-api-access-rqn7m\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226272 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82581b12-b974-4da4-9a9b-8842de4faddb-logs\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226293 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226325 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd366a77-4e31-4194-b0fb-2889144e8441-logs\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226360 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226392 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd366a77-4e31-4194-b0fb-2889144e8441-scripts\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226433 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82581b12-b974-4da4-9a9b-8842de4faddb-ceph\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226494 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cd366a77-4e31-4194-b0fb-2889144e8441-horizon-secret-key\") 
pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226521 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226553 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226584 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gklgk\" (UniqueName: \"kubernetes.io/projected/cd366a77-4e31-4194-b0fb-2889144e8441-kube-api-access-gklgk\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226644 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd366a77-4e31-4194-b0fb-2889144e8441-config-data\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.226681 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82581b12-b974-4da4-9a9b-8842de4faddb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.227254 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd366a77-4e31-4194-b0fb-2889144e8441-logs\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.227925 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd366a77-4e31-4194-b0fb-2889144e8441-scripts\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.230639 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.230991 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd366a77-4e31-4194-b0fb-2889144e8441-config-data\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.240836 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:32 crc kubenswrapper[4895]: E0129 17:07:32.241818 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-sml2f logs public-tls-certs scripts], unattached volumes=[], failed to process volumes=[ceph combined-ca-bundle config-data glance httpd-run kube-api-access-sml2f logs public-tls-certs scripts]: context canceled" pod="openstack/glance-default-external-api-0" 
podUID="45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.242308 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cd366a77-4e31-4194-b0fb-2889144e8441-horizon-secret-key\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.248125 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gklgk\" (UniqueName: \"kubernetes.io/projected/cd366a77-4e31-4194-b0fb-2889144e8441-kube-api-access-gklgk\") pod \"horizon-9d595c8b5-wgg2f\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.294099 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-77da-account-create-update-jmwks" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328228 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82581b12-b974-4da4-9a9b-8842de4faddb-ceph\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328305 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328338 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328362 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328379 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328419 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82581b12-b974-4da4-9a9b-8842de4faddb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328439 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-config-data\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328467 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-config-data\") 
pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328486 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-logs\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328505 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqn7m\" (UniqueName: \"kubernetes.io/projected/82581b12-b974-4da4-9a9b-8842de4faddb-kube-api-access-rqn7m\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328524 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328547 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sml2f\" (UniqueName: \"kubernetes.io/projected/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-kube-api-access-sml2f\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328573 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-ceph\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328603 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328630 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82581b12-b974-4da4-9a9b-8842de4faddb-logs\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328647 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328672 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-scripts\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.328686 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.332328 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.334054 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82581b12-b974-4da4-9a9b-8842de4faddb-ceph\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.334690 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82581b12-b974-4da4-9a9b-8842de4faddb-logs\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.336219 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82581b12-b974-4da4-9a9b-8842de4faddb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.340259 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.340701 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.341347 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.347410 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.351852 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqn7m\" (UniqueName: \"kubernetes.io/projected/82581b12-b974-4da4-9a9b-8842de4faddb-kube-api-access-rqn7m\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.353984 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.392674 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.408477 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.433590 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-config-data\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.433855 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-logs\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.433940 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.433967 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sml2f\" (UniqueName: 
\"kubernetes.io/projected/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-kube-api-access-sml2f\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.433994 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-ceph\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.434028 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.434086 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-scripts\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.434188 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.434213 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.434370 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.442092 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-logs\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.442857 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-ceph\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.446745 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.447191 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-config-data\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " 
pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.448172 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.457737 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.468002 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-scripts\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.470293 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.474835 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sml2f\" (UniqueName: \"kubernetes.io/projected/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-kube-api-access-sml2f\") pod \"glance-default-external-api-0\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.480088 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: 
\"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.500651 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.501682 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"e241f959-4c63-420a-9ab9-988ce0f2a46a","Type":"ContainerStarted","Data":"b03a53604c70dfa64a9b5472b7b2e968705e42a6e0c86c4e9ab3cfcd0abdc2c1"} Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.553629 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.602570 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.643042 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.742652 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-scripts\") pod \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.742724 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-ceph\") pod \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.742786 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-combined-ca-bundle\") pod \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.742822 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-logs\") pod \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.742839 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sml2f\" (UniqueName: \"kubernetes.io/projected/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-kube-api-access-sml2f\") pod \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.742857 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.742890 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-config-data\") pod \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.742977 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-public-tls-certs\") pod \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.743018 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-httpd-run\") pod \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\" (UID: \"45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54\") " Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.743709 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-logs" (OuterVolumeSpecName: "logs") pod "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54" (UID: "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.743754 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54" (UID: "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.752042 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-7pb72"] Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.753764 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-scripts" (OuterVolumeSpecName: "scripts") pod "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54" (UID: "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.753845 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-kube-api-access-sml2f" (OuterVolumeSpecName: "kube-api-access-sml2f") pod "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54" (UID: "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54"). InnerVolumeSpecName "kube-api-access-sml2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.754229 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54" (UID: "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.755010 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54" (UID: "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.755485 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-ceph" (OuterVolumeSpecName: "ceph") pod "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54" (UID: "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.756264 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54" (UID: "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.757047 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-config-data" (OuterVolumeSpecName: "config-data") pod "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54" (UID: "45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.847058 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.847479 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.847489 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.847499 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 
17:07:32.847510 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sml2f\" (UniqueName: \"kubernetes.io/projected/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-kube-api-access-sml2f\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.847542 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.847552 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.847561 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.847571 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.877587 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.881787 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-77da-account-create-update-jmwks"] Jan 29 17:07:32 crc kubenswrapper[4895]: I0129 17:07:32.949900 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.086450 4895 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84675dc6ff-mgf7b"] Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.160822 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9d595c8b5-wgg2f"] Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.249761 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.514541 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82581b12-b974-4da4-9a9b-8842de4faddb","Type":"ContainerStarted","Data":"1eb99c50b85379ae17212408d6f5f50f9481ec6262dd10f7e18e26ec82bf5d71"} Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.516643 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"fd7b84a0-a0ab-40b1-802f-3ec3279e1712","Type":"ContainerStarted","Data":"5cfafdae6fb2419eac0ff3a4175dc56d1885d919bd637c0d8ab45ae8a53cd60d"} Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.525235 4895 generic.go:334] "Generic (PLEG): container finished" podID="ee8529db-c2f6-4976-a6ff-19c73dca11ab" containerID="a23e5a5fa8b8e9072e075e8f25c7d5368f2abd57f83649443d23374eee2523bd" exitCode=0 Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.525276 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-77da-account-create-update-jmwks" event={"ID":"ee8529db-c2f6-4976-a6ff-19c73dca11ab","Type":"ContainerDied","Data":"a23e5a5fa8b8e9072e075e8f25c7d5368f2abd57f83649443d23374eee2523bd"} Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.525311 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-77da-account-create-update-jmwks" event={"ID":"ee8529db-c2f6-4976-a6ff-19c73dca11ab","Type":"ContainerStarted","Data":"0ae9fa90abf1962d89030c9be7626a091d4aaf38695569f063785b84d0c004d1"} Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 
17:07:33.530104 4895 generic.go:334] "Generic (PLEG): container finished" podID="88eecffd-f694-4bce-b938-966e62335540" containerID="89fcd8b854c27057381b74aa48e610291a1d6b638ea24b24b0b5eb9f2e397ffd" exitCode=0 Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.530225 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-7pb72" event={"ID":"88eecffd-f694-4bce-b938-966e62335540","Type":"ContainerDied","Data":"89fcd8b854c27057381b74aa48e610291a1d6b638ea24b24b0b5eb9f2e397ffd"} Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.530255 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-7pb72" event={"ID":"88eecffd-f694-4bce-b938-966e62335540","Type":"ContainerStarted","Data":"b96a14f68c030c2155666d7d2df19442c92c7b8befb8a3c65739446a772a6562"} Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.532939 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d595c8b5-wgg2f" event={"ID":"cd366a77-4e31-4194-b0fb-2889144e8441","Type":"ContainerStarted","Data":"e550434befa2dd0229ffdadcba1c4b567e81794d0de2689adbbabe6d48aacc18"} Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.534755 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84675dc6ff-mgf7b" event={"ID":"6b531318-bcfa-4162-b306-3a293fa21814","Type":"ContainerStarted","Data":"a521c82d95ceeae518f806d19c820d0526e69de1b39752409f3cc10e619cf6b2"} Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.534829 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.673797 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.682626 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.698151 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.700507 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.705368 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.705368 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.726051 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.887672 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8df53265-0efe-467c-b49c-bf7ed46c998b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.888172 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.888208 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h26nq\" (UniqueName: \"kubernetes.io/projected/8df53265-0efe-467c-b49c-bf7ed46c998b-kube-api-access-h26nq\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.888267 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.888451 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.888491 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df53265-0efe-467c-b49c-bf7ed46c998b-logs\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.888517 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-scripts\") pod \"glance-default-external-api-0\" 
(UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.888572 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8df53265-0efe-467c-b49c-bf7ed46c998b-ceph\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.888677 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-config-data\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.990918 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.991036 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df53265-0efe-467c-b49c-bf7ed46c998b-logs\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.991127 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-scripts\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" 
Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.991186 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8df53265-0efe-467c-b49c-bf7ed46c998b-ceph\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.991220 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-config-data\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.991286 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8df53265-0efe-467c-b49c-bf7ed46c998b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.991312 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.991362 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h26nq\" (UniqueName: \"kubernetes.io/projected/8df53265-0efe-467c-b49c-bf7ed46c998b-kube-api-access-h26nq\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.991392 
4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.996404 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8df53265-0efe-467c-b49c-bf7ed46c998b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.996742 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df53265-0efe-467c-b49c-bf7ed46c998b-logs\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.996935 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 29 17:07:33 crc kubenswrapper[4895]: I0129 17:07:33.997981 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8df53265-0efe-467c-b49c-bf7ed46c998b-ceph\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.001099 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-config-data\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.003679 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-scripts\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.004089 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.005656 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.032111 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h26nq\" (UniqueName: \"kubernetes.io/projected/8df53265-0efe-467c-b49c-bf7ed46c998b-kube-api-access-h26nq\") pod \"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.069908 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"glance-default-external-api-0\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.334538 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.562909 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82581b12-b974-4da4-9a9b-8842de4faddb","Type":"ContainerStarted","Data":"0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4"} Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.566782 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"fd7b84a0-a0ab-40b1-802f-3ec3279e1712","Type":"ContainerStarted","Data":"f3302f1ba0bef1bf0f7ae8d0163be5fc8864ede5335ca6e0fc3fb6d45cd38dff"} Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.566823 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"fd7b84a0-a0ab-40b1-802f-3ec3279e1712","Type":"ContainerStarted","Data":"2e99ab6312d3dcf7df91eb878b6029dd14e4667c2bf264ebdce50893bfc90892"} Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.598978 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"e241f959-4c63-420a-9ab9-988ce0f2a46a","Type":"ContainerStarted","Data":"26b673d0ce5a647e3f292c74687313fa2578ca8fbd59b4d1ec3fa0cebce40b72"} Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.599024 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"e241f959-4c63-420a-9ab9-988ce0f2a46a","Type":"ContainerStarted","Data":"4905ef9c4795772a166ac63ed3cc1eca5d883e2915aecfccc58ff7f0b29e2778"} Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.607267 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-backup-0" podStartSLOduration=2.449973842 podStartE2EDuration="3.607243913s" podCreationTimestamp="2026-01-29 17:07:31 +0000 UTC" firstStartedPulling="2026-01-29 17:07:32.654916065 +0000 UTC m=+3336.457893329" lastFinishedPulling="2026-01-29 17:07:33.812186136 +0000 UTC m=+3337.615163400" observedRunningTime="2026-01-29 17:07:34.598324241 +0000 UTC m=+3338.401301515" watchObservedRunningTime="2026-01-29 17:07:34.607243913 +0000 UTC m=+3338.410221187" Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.636225 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.672228539 podStartE2EDuration="3.636202548s" podCreationTimestamp="2026-01-29 17:07:31 +0000 UTC" firstStartedPulling="2026-01-29 17:07:32.474740437 +0000 UTC m=+3336.277717701" lastFinishedPulling="2026-01-29 17:07:33.438714446 +0000 UTC m=+3337.241691710" observedRunningTime="2026-01-29 17:07:34.635278763 +0000 UTC m=+3338.438256047" watchObservedRunningTime="2026-01-29 17:07:34.636202548 +0000 UTC m=+3338.439179812" Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.758408 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84675dc6ff-mgf7b"] Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.879921 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.896909 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66d48bc97b-rb85g"] Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.898731 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.906802 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.914784 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66d48bc97b-rb85g"] Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.949973 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9d595c8b5-wgg2f"] Jan 29 17:07:34 crc kubenswrapper[4895]: I0129 17:07:34.978539 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.011437 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5bb5cc9d-przr2"] Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.014477 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.064697 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-horizon-secret-key\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.064825 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/498b6cd8-82b2-47bf-ac98-612780a6a4f9-scripts\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.064881 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/498b6cd8-82b2-47bf-ac98-612780a6a4f9-logs\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.065142 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498b6cd8-82b2-47bf-ac98-612780a6a4f9-config-data\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.065200 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-combined-ca-bundle\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.065538 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcxwc\" (UniqueName: \"kubernetes.io/projected/498b6cd8-82b2-47bf-ac98-612780a6a4f9-kube-api-access-fcxwc\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.065569 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-horizon-tls-certs\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.205804 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54" path="/var/lib/kubelet/pods/45f1d5fc-8509-4ade-b4fc-ecd5e56cbd54/volumes" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.226253 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bb5cc9d-przr2"] Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.230203 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7823fb45-6935-459a-a1c9-7723a2f52136-combined-ca-bundle\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.230517 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcxwc\" (UniqueName: \"kubernetes.io/projected/498b6cd8-82b2-47bf-ac98-612780a6a4f9-kube-api-access-fcxwc\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.257119 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7823fb45-6935-459a-a1c9-7723a2f52136-logs\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.257445 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-horizon-tls-certs\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.257547 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-horizon-secret-key\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.257625 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7823fb45-6935-459a-a1c9-7723a2f52136-config-data\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.257750 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/498b6cd8-82b2-47bf-ac98-612780a6a4f9-scripts\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.257833 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498b6cd8-82b2-47bf-ac98-612780a6a4f9-logs\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.257937 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7823fb45-6935-459a-a1c9-7723a2f52136-scripts\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.258033 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/7823fb45-6935-459a-a1c9-7723a2f52136-horizon-secret-key\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.258196 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlplz\" (UniqueName: \"kubernetes.io/projected/7823fb45-6935-459a-a1c9-7723a2f52136-kube-api-access-hlplz\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.258351 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498b6cd8-82b2-47bf-ac98-612780a6a4f9-config-data\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.258472 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7823fb45-6935-459a-a1c9-7723a2f52136-horizon-tls-certs\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.258574 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-combined-ca-bundle\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.274029 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/498b6cd8-82b2-47bf-ac98-612780a6a4f9-scripts\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.275342 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498b6cd8-82b2-47bf-ac98-612780a6a4f9-logs\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.285261 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498b6cd8-82b2-47bf-ac98-612780a6a4f9-config-data\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.313571 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-combined-ca-bundle\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.355490 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcxwc\" (UniqueName: \"kubernetes.io/projected/498b6cd8-82b2-47bf-ac98-612780a6a4f9-kube-api-access-fcxwc\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.373373 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7823fb45-6935-459a-a1c9-7723a2f52136-combined-ca-bundle\") pod \"horizon-5bb5cc9d-przr2\" (UID: 
\"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.373480 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7823fb45-6935-459a-a1c9-7723a2f52136-logs\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.373532 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7823fb45-6935-459a-a1c9-7723a2f52136-config-data\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.373582 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7823fb45-6935-459a-a1c9-7723a2f52136-scripts\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.373617 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7823fb45-6935-459a-a1c9-7723a2f52136-horizon-secret-key\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.373854 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlplz\" (UniqueName: \"kubernetes.io/projected/7823fb45-6935-459a-a1c9-7723a2f52136-kube-api-access-hlplz\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 
17:07:35.378030 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7823fb45-6935-459a-a1c9-7723a2f52136-config-data\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.380844 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7823fb45-6935-459a-a1c9-7723a2f52136-logs\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.383778 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7823fb45-6935-459a-a1c9-7723a2f52136-scripts\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.398713 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7823fb45-6935-459a-a1c9-7723a2f52136-combined-ca-bundle\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.399756 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7823fb45-6935-459a-a1c9-7723a2f52136-horizon-tls-certs\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.405603 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/7823fb45-6935-459a-a1c9-7723a2f52136-horizon-secret-key\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.409020 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-horizon-tls-certs\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.409407 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7823fb45-6935-459a-a1c9-7723a2f52136-horizon-tls-certs\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.461386 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-horizon-secret-key\") pod \"horizon-66d48bc97b-rb85g\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.500018 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlplz\" (UniqueName: \"kubernetes.io/projected/7823fb45-6935-459a-a1c9-7723a2f52136-kube-api-access-hlplz\") pod \"horizon-5bb5cc9d-przr2\" (UID: \"7823fb45-6935-459a-a1c9-7723a2f52136\") " pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.509655 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.556236 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.569488 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.652704 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-77da-account-create-update-jmwks" event={"ID":"ee8529db-c2f6-4976-a6ff-19c73dca11ab","Type":"ContainerDied","Data":"0ae9fa90abf1962d89030c9be7626a091d4aaf38695569f063785b84d0c004d1"} Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.652757 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ae9fa90abf1962d89030c9be7626a091d4aaf38695569f063785b84d0c004d1" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.671807 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-7pb72" event={"ID":"88eecffd-f694-4bce-b938-966e62335540","Type":"ContainerDied","Data":"b96a14f68c030c2155666d7d2df19442c92c7b8befb8a3c65739446a772a6562"} Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.671855 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b96a14f68c030c2155666d7d2df19442c92c7b8befb8a3c65739446a772a6562" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.692415 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-7pb72" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.811317 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26chm\" (UniqueName: \"kubernetes.io/projected/88eecffd-f694-4bce-b938-966e62335540-kube-api-access-26chm\") pod \"88eecffd-f694-4bce-b938-966e62335540\" (UID: \"88eecffd-f694-4bce-b938-966e62335540\") " Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.812087 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88eecffd-f694-4bce-b938-966e62335540-operator-scripts\") pod \"88eecffd-f694-4bce-b938-966e62335540\" (UID: \"88eecffd-f694-4bce-b938-966e62335540\") " Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.812816 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88eecffd-f694-4bce-b938-966e62335540-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88eecffd-f694-4bce-b938-966e62335540" (UID: "88eecffd-f694-4bce-b938-966e62335540"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.825483 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-77da-account-create-update-jmwks" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.830196 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88eecffd-f694-4bce-b938-966e62335540-kube-api-access-26chm" (OuterVolumeSpecName: "kube-api-access-26chm") pod "88eecffd-f694-4bce-b938-966e62335540" (UID: "88eecffd-f694-4bce-b938-966e62335540"). InnerVolumeSpecName "kube-api-access-26chm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.914269 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26chm\" (UniqueName: \"kubernetes.io/projected/88eecffd-f694-4bce-b938-966e62335540-kube-api-access-26chm\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:35 crc kubenswrapper[4895]: I0129 17:07:35.914653 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88eecffd-f694-4bce-b938-966e62335540-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.016601 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee8529db-c2f6-4976-a6ff-19c73dca11ab-operator-scripts\") pod \"ee8529db-c2f6-4976-a6ff-19c73dca11ab\" (UID: \"ee8529db-c2f6-4976-a6ff-19c73dca11ab\") " Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.016640 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75xg2\" (UniqueName: \"kubernetes.io/projected/ee8529db-c2f6-4976-a6ff-19c73dca11ab-kube-api-access-75xg2\") pod \"ee8529db-c2f6-4976-a6ff-19c73dca11ab\" (UID: \"ee8529db-c2f6-4976-a6ff-19c73dca11ab\") " Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.017800 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee8529db-c2f6-4976-a6ff-19c73dca11ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee8529db-c2f6-4976-a6ff-19c73dca11ab" (UID: "ee8529db-c2f6-4976-a6ff-19c73dca11ab"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.024362 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8529db-c2f6-4976-a6ff-19c73dca11ab-kube-api-access-75xg2" (OuterVolumeSpecName: "kube-api-access-75xg2") pod "ee8529db-c2f6-4976-a6ff-19c73dca11ab" (UID: "ee8529db-c2f6-4976-a6ff-19c73dca11ab"). InnerVolumeSpecName "kube-api-access-75xg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:36 crc kubenswrapper[4895]: E0129 17:07:36.056185 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.120349 4895 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee8529db-c2f6-4976-a6ff-19c73dca11ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.120385 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75xg2\" (UniqueName: \"kubernetes.io/projected/ee8529db-c2f6-4976-a6ff-19c73dca11ab-kube-api-access-75xg2\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.230604 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bb5cc9d-przr2"] Jan 29 17:07:36 crc kubenswrapper[4895]: W0129 17:07:36.237439 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7823fb45_6935_459a_a1c9_7723a2f52136.slice/crio-66bda76b9c2cd0f9c071ae85359e8fdfe1fafc69775f600a648c6c33abe04510 WatchSource:0}: Error finding container 
66bda76b9c2cd0f9c071ae85359e8fdfe1fafc69775f600a648c6c33abe04510: Status 404 returned error can't find the container with id 66bda76b9c2cd0f9c071ae85359e8fdfe1fafc69775f600a648c6c33abe04510 Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.405882 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66d48bc97b-rb85g"] Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.595929 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.668074 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.695092 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82581b12-b974-4da4-9a9b-8842de4faddb","Type":"ContainerStarted","Data":"87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872"} Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.695256 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="82581b12-b974-4da4-9a9b-8842de4faddb" containerName="glance-log" containerID="cri-o://0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4" gracePeriod=30 Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.695732 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="82581b12-b974-4da4-9a9b-8842de4faddb" containerName="glance-httpd" containerID="cri-o://87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872" gracePeriod=30 Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.700135 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bb5cc9d-przr2" 
event={"ID":"7823fb45-6935-459a-a1c9-7723a2f52136","Type":"ContainerStarted","Data":"66bda76b9c2cd0f9c071ae85359e8fdfe1fafc69775f600a648c6c33abe04510"} Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.703249 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8df53265-0efe-467c-b49c-bf7ed46c998b","Type":"ContainerStarted","Data":"4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2"} Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.703474 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8df53265-0efe-467c-b49c-bf7ed46c998b","Type":"ContainerStarted","Data":"73ce78b3a6253087e9ef9bc124aa614e6c4aec030a46ce9cf2b040cb329793b9"} Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.725338 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-7pb72" Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.727361 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d48bc97b-rb85g" event={"ID":"498b6cd8-82b2-47bf-ac98-612780a6a4f9","Type":"ContainerStarted","Data":"608d48ae23c2e4a3b64a00b0eeede284c472002d9b6cdda0a11d93681bfef26b"} Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.727463 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-77da-account-create-update-jmwks" Jan 29 17:07:36 crc kubenswrapper[4895]: I0129 17:07:36.769371 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.769352112 podStartE2EDuration="5.769352112s" podCreationTimestamp="2026-01-29 17:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:36.726189071 +0000 UTC m=+3340.529166335" watchObservedRunningTime="2026-01-29 17:07:36.769352112 +0000 UTC m=+3340.572329376" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.410605 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-gmsgc"] Jan 29 17:07:37 crc kubenswrapper[4895]: E0129 17:07:37.411825 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8529db-c2f6-4976-a6ff-19c73dca11ab" containerName="mariadb-account-create-update" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.411842 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8529db-c2f6-4976-a6ff-19c73dca11ab" containerName="mariadb-account-create-update" Jan 29 17:07:37 crc kubenswrapper[4895]: E0129 17:07:37.411894 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88eecffd-f694-4bce-b938-966e62335540" containerName="mariadb-database-create" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.411902 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="88eecffd-f694-4bce-b938-966e62335540" containerName="mariadb-database-create" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.412116 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8529db-c2f6-4976-a6ff-19c73dca11ab" containerName="mariadb-account-create-update" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.412130 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="88eecffd-f694-4bce-b938-966e62335540" containerName="mariadb-database-create" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.413055 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.417661 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.417944 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-t4vr6" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.427796 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-gmsgc"] Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.557514 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.572052 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqn7m\" (UniqueName: \"kubernetes.io/projected/82581b12-b974-4da4-9a9b-8842de4faddb-kube-api-access-rqn7m\") pod \"82581b12-b974-4da4-9a9b-8842de4faddb\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.572173 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"82581b12-b974-4da4-9a9b-8842de4faddb\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.572229 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-scripts\") pod \"82581b12-b974-4da4-9a9b-8842de4faddb\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " Jan 29 17:07:37 crc 
kubenswrapper[4895]: I0129 17:07:37.572277 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-internal-tls-certs\") pod \"82581b12-b974-4da4-9a9b-8842de4faddb\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.572407 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82581b12-b974-4da4-9a9b-8842de4faddb-httpd-run\") pod \"82581b12-b974-4da4-9a9b-8842de4faddb\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.572515 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82581b12-b974-4da4-9a9b-8842de4faddb-ceph\") pod \"82581b12-b974-4da4-9a9b-8842de4faddb\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.572565 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82581b12-b974-4da4-9a9b-8842de4faddb-logs\") pod \"82581b12-b974-4da4-9a9b-8842de4faddb\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.572653 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-combined-ca-bundle\") pod \"82581b12-b974-4da4-9a9b-8842de4faddb\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.572742 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-config-data\") pod 
\"82581b12-b974-4da4-9a9b-8842de4faddb\" (UID: \"82581b12-b974-4da4-9a9b-8842de4faddb\") " Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.573127 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8bg\" (UniqueName: \"kubernetes.io/projected/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-kube-api-access-np8bg\") pod \"manila-db-sync-gmsgc\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.573213 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-combined-ca-bundle\") pod \"manila-db-sync-gmsgc\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.573255 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-config-data\") pod \"manila-db-sync-gmsgc\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.573373 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-job-config-data\") pod \"manila-db-sync-gmsgc\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.573485 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82581b12-b974-4da4-9a9b-8842de4faddb-logs" (OuterVolumeSpecName: "logs") pod "82581b12-b974-4da4-9a9b-8842de4faddb" (UID: "82581b12-b974-4da4-9a9b-8842de4faddb"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.573745 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82581b12-b974-4da4-9a9b-8842de4faddb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "82581b12-b974-4da4-9a9b-8842de4faddb" (UID: "82581b12-b974-4da4-9a9b-8842de4faddb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.579226 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82581b12-b974-4da4-9a9b-8842de4faddb-ceph" (OuterVolumeSpecName: "ceph") pod "82581b12-b974-4da4-9a9b-8842de4faddb" (UID: "82581b12-b974-4da4-9a9b-8842de4faddb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.590026 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "82581b12-b974-4da4-9a9b-8842de4faddb" (UID: "82581b12-b974-4da4-9a9b-8842de4faddb"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.602991 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82581b12-b974-4da4-9a9b-8842de4faddb-kube-api-access-rqn7m" (OuterVolumeSpecName: "kube-api-access-rqn7m") pod "82581b12-b974-4da4-9a9b-8842de4faddb" (UID: "82581b12-b974-4da4-9a9b-8842de4faddb"). InnerVolumeSpecName "kube-api-access-rqn7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.603088 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-scripts" (OuterVolumeSpecName: "scripts") pod "82581b12-b974-4da4-9a9b-8842de4faddb" (UID: "82581b12-b974-4da4-9a9b-8842de4faddb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.689197 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np8bg\" (UniqueName: \"kubernetes.io/projected/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-kube-api-access-np8bg\") pod \"manila-db-sync-gmsgc\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.689604 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-combined-ca-bundle\") pod \"manila-db-sync-gmsgc\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.689653 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-config-data\") pod \"manila-db-sync-gmsgc\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.689886 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-job-config-data\") pod \"manila-db-sync-gmsgc\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc 
kubenswrapper[4895]: I0129 17:07:37.690254 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82581b12-b974-4da4-9a9b-8842de4faddb-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.690269 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqn7m\" (UniqueName: \"kubernetes.io/projected/82581b12-b974-4da4-9a9b-8842de4faddb-kube-api-access-rqn7m\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.690299 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.690309 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.690320 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82581b12-b974-4da4-9a9b-8842de4faddb-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.690332 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82581b12-b974-4da4-9a9b-8842de4faddb-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.706111 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82581b12-b974-4da4-9a9b-8842de4faddb" (UID: "82581b12-b974-4da4-9a9b-8842de4faddb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.706158 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "82581b12-b974-4da4-9a9b-8842de4faddb" (UID: "82581b12-b974-4da4-9a9b-8842de4faddb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.706215 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-combined-ca-bundle\") pod \"manila-db-sync-gmsgc\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.706407 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-config-data\") pod \"manila-db-sync-gmsgc\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.707944 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-job-config-data\") pod \"manila-db-sync-gmsgc\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.715232 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np8bg\" (UniqueName: \"kubernetes.io/projected/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-kube-api-access-np8bg\") pod \"manila-db-sync-gmsgc\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc 
kubenswrapper[4895]: I0129 17:07:37.741516 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.757258 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8df53265-0efe-467c-b49c-bf7ed46c998b","Type":"ContainerStarted","Data":"68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3"} Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.757426 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8df53265-0efe-467c-b49c-bf7ed46c998b" containerName="glance-log" containerID="cri-o://4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2" gracePeriod=30 Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.757513 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8df53265-0efe-467c-b49c-bf7ed46c998b" containerName="glance-httpd" containerID="cri-o://68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3" gracePeriod=30 Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.763737 4895 generic.go:334] "Generic (PLEG): container finished" podID="82581b12-b974-4da4-9a9b-8842de4faddb" containerID="87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872" exitCode=0 Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.763789 4895 generic.go:334] "Generic (PLEG): container finished" podID="82581b12-b974-4da4-9a9b-8842de4faddb" containerID="0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4" exitCode=143 Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.763811 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"82581b12-b974-4da4-9a9b-8842de4faddb","Type":"ContainerDied","Data":"87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872"} Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.763837 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82581b12-b974-4da4-9a9b-8842de4faddb","Type":"ContainerDied","Data":"0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4"} Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.763874 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"82581b12-b974-4da4-9a9b-8842de4faddb","Type":"ContainerDied","Data":"1eb99c50b85379ae17212408d6f5f50f9481ec6262dd10f7e18e26ec82bf5d71"} Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.763894 4895 scope.go:117] "RemoveContainer" containerID="87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.764113 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.766477 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-gmsgc" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.793565 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.793547114 podStartE2EDuration="4.793547114s" podCreationTimestamp="2026-01-29 17:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:37.786065341 +0000 UTC m=+3341.589042605" watchObservedRunningTime="2026-01-29 17:07:37.793547114 +0000 UTC m=+3341.596524378" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.797942 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.797977 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.797988 4895 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.801514 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-config-data" (OuterVolumeSpecName: "config-data") pod "82581b12-b974-4da4-9a9b-8842de4faddb" (UID: "82581b12-b974-4da4-9a9b-8842de4faddb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.901094 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82581b12-b974-4da4-9a9b-8842de4faddb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:37 crc kubenswrapper[4895]: I0129 17:07:37.973702 4895 scope.go:117] "RemoveContainer" containerID="0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.017202 4895 scope.go:117] "RemoveContainer" containerID="87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872" Jan 29 17:07:38 crc kubenswrapper[4895]: E0129 17:07:38.018178 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872\": container with ID starting with 87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872 not found: ID does not exist" containerID="87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.018224 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872"} err="failed to get container status \"87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872\": rpc error: code = NotFound desc = could not find container \"87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872\": container with ID starting with 87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872 not found: ID does not exist" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.018255 4895 scope.go:117] "RemoveContainer" containerID="0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4" Jan 29 17:07:38 crc kubenswrapper[4895]: E0129 17:07:38.018842 4895 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4\": container with ID starting with 0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4 not found: ID does not exist" containerID="0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.018859 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4"} err="failed to get container status \"0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4\": rpc error: code = NotFound desc = could not find container \"0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4\": container with ID starting with 0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4 not found: ID does not exist" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.018892 4895 scope.go:117] "RemoveContainer" containerID="87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.019179 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872"} err="failed to get container status \"87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872\": rpc error: code = NotFound desc = could not find container \"87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872\": container with ID starting with 87bc3070f157aa2690e3a6d2521e818ceff27dec4d863b98e9fdf4b7488aa872 not found: ID does not exist" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.019213 4895 scope.go:117] "RemoveContainer" containerID="0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.019651 4895 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4"} err="failed to get container status \"0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4\": rpc error: code = NotFound desc = could not find container \"0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4\": container with ID starting with 0ebcee6350843f7f96e2bb8fd71eb5ead35aeb235852a820a526cc636c3eddb4 not found: ID does not exist" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.142016 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.152351 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.169645 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:07:38 crc kubenswrapper[4895]: E0129 17:07:38.170097 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82581b12-b974-4da4-9a9b-8842de4faddb" containerName="glance-log" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.170110 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="82581b12-b974-4da4-9a9b-8842de4faddb" containerName="glance-log" Jan 29 17:07:38 crc kubenswrapper[4895]: E0129 17:07:38.170137 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82581b12-b974-4da4-9a9b-8842de4faddb" containerName="glance-httpd" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.170143 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="82581b12-b974-4da4-9a9b-8842de4faddb" containerName="glance-httpd" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.170330 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="82581b12-b974-4da4-9a9b-8842de4faddb" containerName="glance-httpd" Jan 
29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.170349 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="82581b12-b974-4da4-9a9b-8842de4faddb" containerName="glance-log" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.171370 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.177847 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.178847 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.192966 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:07:38 crc kubenswrapper[4895]: E0129 17:07:38.198646 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82581b12_b974_4da4_9a9b_8842de4faddb.slice/crio-1eb99c50b85379ae17212408d6f5f50f9481ec6262dd10f7e18e26ec82bf5d71\": RecentStats: unable to find data in memory cache]" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.208580 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d710098a-e10c-427b-8bdb-bb9cfad0376d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.208699 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d710098a-e10c-427b-8bdb-bb9cfad0376d-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.208744 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d710098a-e10c-427b-8bdb-bb9cfad0376d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.208781 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d710098a-e10c-427b-8bdb-bb9cfad0376d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.208803 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.208823 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d710098a-e10c-427b-8bdb-bb9cfad0376d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.208950 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p64wn\" (UniqueName: 
\"kubernetes.io/projected/d710098a-e10c-427b-8bdb-bb9cfad0376d-kube-api-access-p64wn\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.208977 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d710098a-e10c-427b-8bdb-bb9cfad0376d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.209015 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d710098a-e10c-427b-8bdb-bb9cfad0376d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.312536 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d710098a-e10c-427b-8bdb-bb9cfad0376d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.312601 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d710098a-e10c-427b-8bdb-bb9cfad0376d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.312626 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") 
pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.312649 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d710098a-e10c-427b-8bdb-bb9cfad0376d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.312716 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p64wn\" (UniqueName: \"kubernetes.io/projected/d710098a-e10c-427b-8bdb-bb9cfad0376d-kube-api-access-p64wn\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.312745 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d710098a-e10c-427b-8bdb-bb9cfad0376d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.312787 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d710098a-e10c-427b-8bdb-bb9cfad0376d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.312844 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d710098a-e10c-427b-8bdb-bb9cfad0376d-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.312900 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d710098a-e10c-427b-8bdb-bb9cfad0376d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.317564 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d710098a-e10c-427b-8bdb-bb9cfad0376d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.323281 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d710098a-e10c-427b-8bdb-bb9cfad0376d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.323830 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d710098a-e10c-427b-8bdb-bb9cfad0376d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.322559 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") device mount path \"/mnt/openstack/pv12\"" 
pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.326287 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d710098a-e10c-427b-8bdb-bb9cfad0376d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.331858 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d710098a-e10c-427b-8bdb-bb9cfad0376d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.335573 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d710098a-e10c-427b-8bdb-bb9cfad0376d-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.339749 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p64wn\" (UniqueName: \"kubernetes.io/projected/d710098a-e10c-427b-8bdb-bb9cfad0376d-kube-api-access-p64wn\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.342056 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d710098a-e10c-427b-8bdb-bb9cfad0376d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.393238 
4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"d710098a-e10c-427b-8bdb-bb9cfad0376d\") " pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.486226 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-gmsgc"] Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.520520 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.559029 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.728521 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h26nq\" (UniqueName: \"kubernetes.io/projected/8df53265-0efe-467c-b49c-bf7ed46c998b-kube-api-access-h26nq\") pod \"8df53265-0efe-467c-b49c-bf7ed46c998b\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.729046 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"8df53265-0efe-467c-b49c-bf7ed46c998b\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.729104 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-combined-ca-bundle\") pod \"8df53265-0efe-467c-b49c-bf7ed46c998b\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.729135 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-config-data\") pod \"8df53265-0efe-467c-b49c-bf7ed46c998b\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.729156 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8df53265-0efe-467c-b49c-bf7ed46c998b-ceph\") pod \"8df53265-0efe-467c-b49c-bf7ed46c998b\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.729232 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df53265-0efe-467c-b49c-bf7ed46c998b-logs\") pod \"8df53265-0efe-467c-b49c-bf7ed46c998b\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.729278 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8df53265-0efe-467c-b49c-bf7ed46c998b-httpd-run\") pod \"8df53265-0efe-467c-b49c-bf7ed46c998b\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.729334 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-scripts\") pod \"8df53265-0efe-467c-b49c-bf7ed46c998b\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.729352 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-public-tls-certs\") pod \"8df53265-0efe-467c-b49c-bf7ed46c998b\" (UID: \"8df53265-0efe-467c-b49c-bf7ed46c998b\") " Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.730789 4895 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8df53265-0efe-467c-b49c-bf7ed46c998b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8df53265-0efe-467c-b49c-bf7ed46c998b" (UID: "8df53265-0efe-467c-b49c-bf7ed46c998b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.731124 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8df53265-0efe-467c-b49c-bf7ed46c998b-logs" (OuterVolumeSpecName: "logs") pod "8df53265-0efe-467c-b49c-bf7ed46c998b" (UID: "8df53265-0efe-467c-b49c-bf7ed46c998b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.787296 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df53265-0efe-467c-b49c-bf7ed46c998b-ceph" (OuterVolumeSpecName: "ceph") pod "8df53265-0efe-467c-b49c-bf7ed46c998b" (UID: "8df53265-0efe-467c-b49c-bf7ed46c998b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.794238 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-scripts" (OuterVolumeSpecName: "scripts") pod "8df53265-0efe-467c-b49c-bf7ed46c998b" (UID: "8df53265-0efe-467c-b49c-bf7ed46c998b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.797338 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "8df53265-0efe-467c-b49c-bf7ed46c998b" (UID: "8df53265-0efe-467c-b49c-bf7ed46c998b"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.817372 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df53265-0efe-467c-b49c-bf7ed46c998b-kube-api-access-h26nq" (OuterVolumeSpecName: "kube-api-access-h26nq") pod "8df53265-0efe-467c-b49c-bf7ed46c998b" (UID: "8df53265-0efe-467c-b49c-bf7ed46c998b"). InnerVolumeSpecName "kube-api-access-h26nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.833217 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.833253 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8df53265-0efe-467c-b49c-bf7ed46c998b-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.833262 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df53265-0efe-467c-b49c-bf7ed46c998b-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.833272 4895 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8df53265-0efe-467c-b49c-bf7ed46c998b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.833281 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.833289 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h26nq\" (UniqueName: 
\"kubernetes.io/projected/8df53265-0efe-467c-b49c-bf7ed46c998b-kube-api-access-h26nq\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.920306 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8df53265-0efe-467c-b49c-bf7ed46c998b" (UID: "8df53265-0efe-467c-b49c-bf7ed46c998b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.922181 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-gmsgc" event={"ID":"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd","Type":"ContainerStarted","Data":"beb8f28abb6938dfb56f13ec3fa44e3c37effb94bb9ff7cc0443e55c7f2a8fef"} Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.934948 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.960174 4895 generic.go:334] "Generic (PLEG): container finished" podID="8df53265-0efe-467c-b49c-bf7ed46c998b" containerID="68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3" exitCode=143 Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.960219 4895 generic.go:334] "Generic (PLEG): container finished" podID="8df53265-0efe-467c-b49c-bf7ed46c998b" containerID="4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2" exitCode=143 Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.960245 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8df53265-0efe-467c-b49c-bf7ed46c998b","Type":"ContainerDied","Data":"68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3"} Jan 29 17:07:38 crc 
kubenswrapper[4895]: I0129 17:07:38.960277 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8df53265-0efe-467c-b49c-bf7ed46c998b","Type":"ContainerDied","Data":"4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2"} Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.960291 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8df53265-0efe-467c-b49c-bf7ed46c998b","Type":"ContainerDied","Data":"73ce78b3a6253087e9ef9bc124aa614e6c4aec030a46ce9cf2b040cb329793b9"} Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.960307 4895 scope.go:117] "RemoveContainer" containerID="68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.960524 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:07:38 crc kubenswrapper[4895]: I0129 17:07:38.993097 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.020013 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8df53265-0efe-467c-b49c-bf7ed46c998b" (UID: "8df53265-0efe-467c-b49c-bf7ed46c998b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.029281 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-config-data" (OuterVolumeSpecName: "config-data") pod "8df53265-0efe-467c-b49c-bf7ed46c998b" (UID: "8df53265-0efe-467c-b49c-bf7ed46c998b"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.040542 4895 scope.go:117] "RemoveContainer" containerID="4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.053906 4895 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.053942 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.053962 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df53265-0efe-467c-b49c-bf7ed46c998b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:39 crc kubenswrapper[4895]: E0129 17:07:39.064654 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.086741 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82581b12-b974-4da4-9a9b-8842de4faddb" path="/var/lib/kubelet/pods/82581b12-b974-4da4-9a9b-8842de4faddb/volumes" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.129731 4895 scope.go:117] "RemoveContainer" containerID="68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3" Jan 29 17:07:39 crc kubenswrapper[4895]: E0129 17:07:39.145932 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3\": container with ID starting with 68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3 not found: ID does not exist" containerID="68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.145993 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3"} err="failed to get container status \"68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3\": rpc error: code = NotFound desc = could not find container \"68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3\": container with ID starting with 68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3 not found: ID does not exist" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.146027 4895 scope.go:117] "RemoveContainer" containerID="4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2" Jan 29 17:07:39 crc kubenswrapper[4895]: E0129 17:07:39.150671 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2\": container with ID starting with 4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2 not found: ID does not exist" containerID="4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.150736 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2"} err="failed to get container status \"4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2\": rpc error: code = NotFound desc = could not find container 
\"4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2\": container with ID starting with 4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2 not found: ID does not exist" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.150770 4895 scope.go:117] "RemoveContainer" containerID="68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.164382 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3"} err="failed to get container status \"68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3\": rpc error: code = NotFound desc = could not find container \"68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3\": container with ID starting with 68f99d0d88f8a2a876715fba3e70a807dfa81b9d4faa9ab11cfe5937838dfea3 not found: ID does not exist" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.164496 4895 scope.go:117] "RemoveContainer" containerID="4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.165844 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2"} err="failed to get container status \"4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2\": rpc error: code = NotFound desc = could not find container \"4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2\": container with ID starting with 4d3175649c7d2eef024bb8ec421f4766d86b5cb1278f0afd77a504361c7693d2 not found: ID does not exist" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.199391 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.316153 4895 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.347979 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.372055 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:39 crc kubenswrapper[4895]: E0129 17:07:39.372633 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df53265-0efe-467c-b49c-bf7ed46c998b" containerName="glance-log" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.372662 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df53265-0efe-467c-b49c-bf7ed46c998b" containerName="glance-log" Jan 29 17:07:39 crc kubenswrapper[4895]: E0129 17:07:39.372696 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df53265-0efe-467c-b49c-bf7ed46c998b" containerName="glance-httpd" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.372706 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df53265-0efe-467c-b49c-bf7ed46c998b" containerName="glance-httpd" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.373028 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df53265-0efe-467c-b49c-bf7ed46c998b" containerName="glance-log" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.373070 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df53265-0efe-467c-b49c-bf7ed46c998b" containerName="glance-httpd" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.374429 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.376930 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.378266 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.393474 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.471353 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef22134-e2b5-45b3-87bb-1b061d3834d2-logs\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.472196 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3ef22134-e2b5-45b3-87bb-1b061d3834d2-ceph\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.472359 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef22134-e2b5-45b3-87bb-1b061d3834d2-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.472496 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.472618 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef22134-e2b5-45b3-87bb-1b061d3834d2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.472780 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b65c\" (UniqueName: \"kubernetes.io/projected/3ef22134-e2b5-45b3-87bb-1b061d3834d2-kube-api-access-7b65c\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.472849 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef22134-e2b5-45b3-87bb-1b061d3834d2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.473102 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef22134-e2b5-45b3-87bb-1b061d3834d2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.473158 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/3ef22134-e2b5-45b3-87bb-1b061d3834d2-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.576439 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3ef22134-e2b5-45b3-87bb-1b061d3834d2-ceph\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.576504 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef22134-e2b5-45b3-87bb-1b061d3834d2-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.576540 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.576580 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef22134-e2b5-45b3-87bb-1b061d3834d2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.576620 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b65c\" (UniqueName: 
\"kubernetes.io/projected/3ef22134-e2b5-45b3-87bb-1b061d3834d2-kube-api-access-7b65c\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.576646 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef22134-e2b5-45b3-87bb-1b061d3834d2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.576688 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef22134-e2b5-45b3-87bb-1b061d3834d2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.576704 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef22134-e2b5-45b3-87bb-1b061d3834d2-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.576786 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef22134-e2b5-45b3-87bb-1b061d3834d2-logs\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.577291 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ef22134-e2b5-45b3-87bb-1b061d3834d2-logs\") pod 
\"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.577781 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ef22134-e2b5-45b3-87bb-1b061d3834d2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.583169 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.589472 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3ef22134-e2b5-45b3-87bb-1b061d3834d2-ceph\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.589948 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef22134-e2b5-45b3-87bb-1b061d3834d2-scripts\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.593613 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef22134-e2b5-45b3-87bb-1b061d3834d2-config-data\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " 
pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.594929 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef22134-e2b5-45b3-87bb-1b061d3834d2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.599665 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b65c\" (UniqueName: \"kubernetes.io/projected/3ef22134-e2b5-45b3-87bb-1b061d3834d2-kube-api-access-7b65c\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.599802 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ef22134-e2b5-45b3-87bb-1b061d3834d2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.624937 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"3ef22134-e2b5-45b3-87bb-1b061d3834d2\") " pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.700922 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:07:39 crc kubenswrapper[4895]: I0129 17:07:39.990995 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d710098a-e10c-427b-8bdb-bb9cfad0376d","Type":"ContainerStarted","Data":"f0e5d82ade244504a4726fc92b513c2851f917b22cc5789fd3340e41fc89c2a9"} Jan 29 17:07:41 crc kubenswrapper[4895]: I0129 17:07:41.005739 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d710098a-e10c-427b-8bdb-bb9cfad0376d","Type":"ContainerStarted","Data":"72a2927bb6862fad4c29488d95acba7521f55b0c68609054e5e4126fcf53bad1"} Jan 29 17:07:41 crc kubenswrapper[4895]: I0129 17:07:41.050493 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8df53265-0efe-467c-b49c-bf7ed46c998b" path="/var/lib/kubelet/pods/8df53265-0efe-467c-b49c-bf7ed46c998b/volumes" Jan 29 17:07:41 crc kubenswrapper[4895]: I0129 17:07:41.841592 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 29 17:07:41 crc kubenswrapper[4895]: I0129 17:07:41.912667 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 29 17:07:43 crc kubenswrapper[4895]: I0129 17:07:43.037669 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:07:43 crc kubenswrapper[4895]: E0129 17:07:43.037965 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:07:45 crc 
kubenswrapper[4895]: E0129 17:07:45.274051 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:07:50 crc kubenswrapper[4895]: E0129 17:07:50.812918 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:07:52 crc kubenswrapper[4895]: I0129 17:07:52.227730 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:07:52 crc kubenswrapper[4895]: E0129 17:07:52.309774 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-manila-api:current-podified" Jan 29 17:07:52 crc kubenswrapper[4895]: E0129 17:07:52.309994 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manila-db-sync,Image:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,Command:[/bin/bash],Args:[-c sleep 0 && /usr/bin/manila-manage --config-dir /etc/manila/manila.conf.d db 
sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:job-config-data,ReadOnly:true,MountPath:/etc/manila/manila.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-np8bg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42429,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42429,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-db-sync-gmsgc_openstack(ae3f812e-d3cf-4cac-b58f-bb93fe0557bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:07:52 crc kubenswrapper[4895]: E0129 17:07:52.312212 4895 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/manila-db-sync-gmsgc" podUID="ae3f812e-d3cf-4cac-b58f-bb93fe0557bd" Jan 29 17:07:52 crc kubenswrapper[4895]: W0129 17:07:52.314135 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ef22134_e2b5_45b3_87bb_1b061d3834d2.slice/crio-979948c8936e1b29d52380ea9a42ef56e006e4a1685e6a7940d1babe73d851ff WatchSource:0}: Error finding container 979948c8936e1b29d52380ea9a42ef56e006e4a1685e6a7940d1babe73d851ff: Status 404 returned error can't find the container with id 979948c8936e1b29d52380ea9a42ef56e006e4a1685e6a7940d1babe73d851ff Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.126693 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bb5cc9d-przr2" event={"ID":"7823fb45-6935-459a-a1c9-7723a2f52136","Type":"ContainerStarted","Data":"12f18b3c1662fbb2a82e6d5e34c222d4ee24502dee37e468ba63925ad0f8262c"} Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.127174 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bb5cc9d-przr2" event={"ID":"7823fb45-6935-459a-a1c9-7723a2f52136","Type":"ContainerStarted","Data":"6005bcbee193cb9280226d74a6a3f26b0407f04ac7b06312a7815796319d0ab4"} Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.134843 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef22134-e2b5-45b3-87bb-1b061d3834d2","Type":"ContainerStarted","Data":"85450542fca24c31408f26723a6f920d744acc2275f4ec4afae7309d5600677e"} Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.134921 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"3ef22134-e2b5-45b3-87bb-1b061d3834d2","Type":"ContainerStarted","Data":"979948c8936e1b29d52380ea9a42ef56e006e4a1685e6a7940d1babe73d851ff"} Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.136614 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d48bc97b-rb85g" event={"ID":"498b6cd8-82b2-47bf-ac98-612780a6a4f9","Type":"ContainerStarted","Data":"337cfaccd82c9afd05fe7fb16f4998cf445980bed998a2fa584001d1e7039045"} Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.136639 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d48bc97b-rb85g" event={"ID":"498b6cd8-82b2-47bf-ac98-612780a6a4f9","Type":"ContainerStarted","Data":"e86774c621aac90c360b4426bd303af8358f8b72c73c8673781b7877bda63cbc"} Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.140163 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d710098a-e10c-427b-8bdb-bb9cfad0376d","Type":"ContainerStarted","Data":"5cf3bc6ae124462075c4117b6bf34e4631cef49c0403ed5fb2554aab31f57997"} Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.143372 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d595c8b5-wgg2f" event={"ID":"cd366a77-4e31-4194-b0fb-2889144e8441","Type":"ContainerStarted","Data":"be1fc858f94efd76cd4d7fecaf284474fa0fc29b939da497fd70516512e0ef24"} Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.143414 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d595c8b5-wgg2f" event={"ID":"cd366a77-4e31-4194-b0fb-2889144e8441","Type":"ContainerStarted","Data":"c94d41eb3011c6449616d3d62b7fa46e7fddc36f60a275325f29700f46c4c1b4"} Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.143520 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9d595c8b5-wgg2f" podUID="cd366a77-4e31-4194-b0fb-2889144e8441" containerName="horizon-log" 
containerID="cri-o://c94d41eb3011c6449616d3d62b7fa46e7fddc36f60a275325f29700f46c4c1b4" gracePeriod=30 Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.143757 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9d595c8b5-wgg2f" podUID="cd366a77-4e31-4194-b0fb-2889144e8441" containerName="horizon" containerID="cri-o://be1fc858f94efd76cd4d7fecaf284474fa0fc29b939da497fd70516512e0ef24" gracePeriod=30 Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.157428 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84675dc6ff-mgf7b" podUID="6b531318-bcfa-4162-b306-3a293fa21814" containerName="horizon-log" containerID="cri-o://1f9479d7dfd7363ee8f36ccc3eb5c33974188c55b51f3b9d0ef205920dc4f529" gracePeriod=30 Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.157825 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84675dc6ff-mgf7b" podUID="6b531318-bcfa-4162-b306-3a293fa21814" containerName="horizon" containerID="cri-o://28ee92cb98352b3c0e651cf8cff619dac548878e6e0d1c8d7504f5e1696c5b0a" gracePeriod=30 Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.158016 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84675dc6ff-mgf7b" event={"ID":"6b531318-bcfa-4162-b306-3a293fa21814","Type":"ContainerStarted","Data":"28ee92cb98352b3c0e651cf8cff619dac548878e6e0d1c8d7504f5e1696c5b0a"} Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.158104 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84675dc6ff-mgf7b" event={"ID":"6b531318-bcfa-4162-b306-3a293fa21814","Type":"ContainerStarted","Data":"1f9479d7dfd7363ee8f36ccc3eb5c33974188c55b51f3b9d0ef205920dc4f529"} Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.160620 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5bb5cc9d-przr2" podStartSLOduration=3.039632832 
podStartE2EDuration="19.160597515s" podCreationTimestamp="2026-01-29 17:07:34 +0000 UTC" firstStartedPulling="2026-01-29 17:07:36.241066131 +0000 UTC m=+3340.044043395" lastFinishedPulling="2026-01-29 17:07:52.362030814 +0000 UTC m=+3356.165008078" observedRunningTime="2026-01-29 17:07:53.146206325 +0000 UTC m=+3356.949183589" watchObservedRunningTime="2026-01-29 17:07:53.160597515 +0000 UTC m=+3356.963574799" Jan 29 17:07:53 crc kubenswrapper[4895]: E0129 17:07:53.165170 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-manila-api:current-podified\\\"\"" pod="openstack/manila-db-sync-gmsgc" podUID="ae3f812e-d3cf-4cac-b58f-bb93fe0557bd" Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.183639 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-9d595c8b5-wgg2f" podStartSLOduration=3.024950278 podStartE2EDuration="22.183613239s" podCreationTimestamp="2026-01-29 17:07:31 +0000 UTC" firstStartedPulling="2026-01-29 17:07:33.179110274 +0000 UTC m=+3336.982087538" lastFinishedPulling="2026-01-29 17:07:52.337773235 +0000 UTC m=+3356.140750499" observedRunningTime="2026-01-29 17:07:53.170341659 +0000 UTC m=+3356.973318943" watchObservedRunningTime="2026-01-29 17:07:53.183613239 +0000 UTC m=+3356.986590513" Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.206243 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.206216342 podStartE2EDuration="15.206216342s" podCreationTimestamp="2026-01-29 17:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:53.194342981 +0000 UTC m=+3356.997320245" watchObservedRunningTime="2026-01-29 17:07:53.206216342 +0000 UTC m=+3357.009193606" 
Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.227156 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66d48bc97b-rb85g" podStartSLOduration=3.278519823 podStartE2EDuration="19.225185997s" podCreationTimestamp="2026-01-29 17:07:34 +0000 UTC" firstStartedPulling="2026-01-29 17:07:36.416926102 +0000 UTC m=+3340.219903366" lastFinishedPulling="2026-01-29 17:07:52.363592276 +0000 UTC m=+3356.166569540" observedRunningTime="2026-01-29 17:07:53.22012615 +0000 UTC m=+3357.023103424" watchObservedRunningTime="2026-01-29 17:07:53.225185997 +0000 UTC m=+3357.028163271" Jan 29 17:07:53 crc kubenswrapper[4895]: I0129 17:07:53.280647 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-84675dc6ff-mgf7b" podStartSLOduration=3.014633969 podStartE2EDuration="22.280625171s" podCreationTimestamp="2026-01-29 17:07:31 +0000 UTC" firstStartedPulling="2026-01-29 17:07:33.066373226 +0000 UTC m=+3336.869350490" lastFinishedPulling="2026-01-29 17:07:52.332364428 +0000 UTC m=+3356.135341692" observedRunningTime="2026-01-29 17:07:53.27361001 +0000 UTC m=+3357.076587284" watchObservedRunningTime="2026-01-29 17:07:53.280625171 +0000 UTC m=+3357.083602435" Jan 29 17:07:54 crc kubenswrapper[4895]: E0129 17:07:54.038575 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:07:54 crc kubenswrapper[4895]: I0129 17:07:54.168178 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3ef22134-e2b5-45b3-87bb-1b061d3834d2","Type":"ContainerStarted","Data":"220c89f9d6703c3c76ddcc9151877f12871d3e199f8cf38550f6289acb5f5655"} Jan 29 17:07:54 crc kubenswrapper[4895]: I0129 
17:07:54.187735 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=15.187720197 podStartE2EDuration="15.187720197s" podCreationTimestamp="2026-01-29 17:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:07:54.187686566 +0000 UTC m=+3357.990663830" watchObservedRunningTime="2026-01-29 17:07:54.187720197 +0000 UTC m=+3357.990697461" Jan 29 17:07:55 crc kubenswrapper[4895]: I0129 17:07:55.037274 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:07:55 crc kubenswrapper[4895]: E0129 17:07:55.038304 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:07:55 crc kubenswrapper[4895]: I0129 17:07:55.510785 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:55 crc kubenswrapper[4895]: I0129 17:07:55.510831 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:07:55 crc kubenswrapper[4895]: I0129 17:07:55.569763 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:55 crc kubenswrapper[4895]: I0129 17:07:55.569813 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:07:58 crc kubenswrapper[4895]: E0129 17:07:58.039100 4895 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:07:58 crc kubenswrapper[4895]: I0129 17:07:58.520726 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:58 crc kubenswrapper[4895]: I0129 17:07:58.521206 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:58 crc kubenswrapper[4895]: I0129 17:07:58.558078 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:58 crc kubenswrapper[4895]: I0129 17:07:58.567632 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:59 crc kubenswrapper[4895]: I0129 17:07:59.230701 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:59 crc kubenswrapper[4895]: I0129 17:07:59.230745 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 17:07:59 crc kubenswrapper[4895]: I0129 17:07:59.702607 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 17:07:59 crc kubenswrapper[4895]: I0129 17:07:59.702685 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 17:07:59 crc kubenswrapper[4895]: I0129 17:07:59.747075 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 17:07:59 crc kubenswrapper[4895]: I0129 
17:07:59.765460 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 17:08:00 crc kubenswrapper[4895]: I0129 17:08:00.238163 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 17:08:00 crc kubenswrapper[4895]: I0129 17:08:00.239705 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 17:08:02 crc kubenswrapper[4895]: I0129 17:08:02.256565 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 17:08:02 crc kubenswrapper[4895]: I0129 17:08:02.256907 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 17:08:02 crc kubenswrapper[4895]: I0129 17:08:02.354900 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:08:02 crc kubenswrapper[4895]: I0129 17:08:02.410200 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:08:03 crc kubenswrapper[4895]: I0129 17:08:03.177162 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 17:08:03 crc kubenswrapper[4895]: I0129 17:08:03.179009 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:03 crc kubenswrapper[4895]: I0129 17:08:03.179150 4895 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 17:08:03 crc kubenswrapper[4895]: I0129 17:08:03.184527 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 17:08:03 crc kubenswrapper[4895]: I0129 17:08:03.186134 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 
29 17:08:05 crc kubenswrapper[4895]: E0129 17:08:05.039262 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:08:05 crc kubenswrapper[4895]: I0129 17:08:05.513764 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bb5cc9d-przr2" podUID="7823fb45-6935-459a-a1c9-7723a2f52136" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.248:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.248:8443: connect: connection refused" Jan 29 17:08:05 crc kubenswrapper[4895]: I0129 17:08:05.573217 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66d48bc97b-rb85g" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 29 17:08:06 crc kubenswrapper[4895]: E0129 17:08:06.038781 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:08:08 crc kubenswrapper[4895]: I0129 17:08:08.320400 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-gmsgc" event={"ID":"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd","Type":"ContainerStarted","Data":"1bb39dc9487955ca75523c7b2297397729282287ce5337d94d812fd137077d8b"} Jan 29 17:08:08 crc kubenswrapper[4895]: I0129 17:08:08.348199 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/manila-db-sync-gmsgc" podStartSLOduration=2.567805121 podStartE2EDuration="31.348178109s" podCreationTimestamp="2026-01-29 17:07:37 +0000 UTC" firstStartedPulling="2026-01-29 17:07:38.501806746 +0000 UTC m=+3342.304784010" lastFinishedPulling="2026-01-29 17:08:07.282179734 +0000 UTC m=+3371.085156998" observedRunningTime="2026-01-29 17:08:08.340372428 +0000 UTC m=+3372.143349712" watchObservedRunningTime="2026-01-29 17:08:08.348178109 +0000 UTC m=+3372.151155373" Jan 29 17:08:09 crc kubenswrapper[4895]: E0129 17:08:09.038792 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:08:10 crc kubenswrapper[4895]: I0129 17:08:10.037543 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:08:10 crc kubenswrapper[4895]: E0129 17:08:10.037885 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:08:17 crc kubenswrapper[4895]: E0129 17:08:17.045127 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:08:17 crc kubenswrapper[4895]: I0129 
17:08:17.492644 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:08:17 crc kubenswrapper[4895]: I0129 17:08:17.592463 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:08:19 crc kubenswrapper[4895]: E0129 17:08:19.039408 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:08:19 crc kubenswrapper[4895]: I0129 17:08:19.449750 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5bb5cc9d-przr2" Jan 29 17:08:19 crc kubenswrapper[4895]: I0129 17:08:19.527577 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66d48bc97b-rb85g"] Jan 29 17:08:19 crc kubenswrapper[4895]: I0129 17:08:19.527854 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66d48bc97b-rb85g" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon-log" containerID="cri-o://e86774c621aac90c360b4426bd303af8358f8b72c73c8673781b7877bda63cbc" gracePeriod=30 Jan 29 17:08:19 crc kubenswrapper[4895]: I0129 17:08:19.528043 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66d48bc97b-rb85g" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon" containerID="cri-o://337cfaccd82c9afd05fe7fb16f4998cf445980bed998a2fa584001d1e7039045" gracePeriod=30 Jan 29 17:08:19 crc kubenswrapper[4895]: I0129 17:08:19.562168 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66d48bc97b-rb85g" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon" probeResult="failure" 
output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 29 17:08:22 crc kubenswrapper[4895]: I0129 17:08:22.037522 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:08:22 crc kubenswrapper[4895]: E0129 17:08:22.038163 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:08:22 crc kubenswrapper[4895]: I0129 17:08:22.943305 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66d48bc97b-rb85g" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:40468->10.217.0.247:8443: read: connection reset by peer" Jan 29 17:08:23 crc kubenswrapper[4895]: E0129 17:08:23.078881 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.449382 4895 generic.go:334] "Generic (PLEG): container finished" podID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerID="337cfaccd82c9afd05fe7fb16f4998cf445980bed998a2fa584001d1e7039045" exitCode=0 Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.449484 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d48bc97b-rb85g" 
event={"ID":"498b6cd8-82b2-47bf-ac98-612780a6a4f9","Type":"ContainerDied","Data":"337cfaccd82c9afd05fe7fb16f4998cf445980bed998a2fa584001d1e7039045"} Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.454079 4895 generic.go:334] "Generic (PLEG): container finished" podID="cd366a77-4e31-4194-b0fb-2889144e8441" containerID="be1fc858f94efd76cd4d7fecaf284474fa0fc29b939da497fd70516512e0ef24" exitCode=137 Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.454126 4895 generic.go:334] "Generic (PLEG): container finished" podID="cd366a77-4e31-4194-b0fb-2889144e8441" containerID="c94d41eb3011c6449616d3d62b7fa46e7fddc36f60a275325f29700f46c4c1b4" exitCode=137 Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.454213 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d595c8b5-wgg2f" event={"ID":"cd366a77-4e31-4194-b0fb-2889144e8441","Type":"ContainerDied","Data":"be1fc858f94efd76cd4d7fecaf284474fa0fc29b939da497fd70516512e0ef24"} Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.454246 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d595c8b5-wgg2f" event={"ID":"cd366a77-4e31-4194-b0fb-2889144e8441","Type":"ContainerDied","Data":"c94d41eb3011c6449616d3d62b7fa46e7fddc36f60a275325f29700f46c4c1b4"} Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.456157 4895 generic.go:334] "Generic (PLEG): container finished" podID="6b531318-bcfa-4162-b306-3a293fa21814" containerID="28ee92cb98352b3c0e651cf8cff619dac548878e6e0d1c8d7504f5e1696c5b0a" exitCode=137 Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.456192 4895 generic.go:334] "Generic (PLEG): container finished" podID="6b531318-bcfa-4162-b306-3a293fa21814" containerID="1f9479d7dfd7363ee8f36ccc3eb5c33974188c55b51f3b9d0ef205920dc4f529" exitCode=137 Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.456216 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84675dc6ff-mgf7b" 
event={"ID":"6b531318-bcfa-4162-b306-3a293fa21814","Type":"ContainerDied","Data":"28ee92cb98352b3c0e651cf8cff619dac548878e6e0d1c8d7504f5e1696c5b0a"} Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.456239 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84675dc6ff-mgf7b" event={"ID":"6b531318-bcfa-4162-b306-3a293fa21814","Type":"ContainerDied","Data":"1f9479d7dfd7363ee8f36ccc3eb5c33974188c55b51f3b9d0ef205920dc4f529"} Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.635724 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.644349 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.686315 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b531318-bcfa-4162-b306-3a293fa21814-logs\") pod \"6b531318-bcfa-4162-b306-3a293fa21814\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.686385 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chd45\" (UniqueName: \"kubernetes.io/projected/6b531318-bcfa-4162-b306-3a293fa21814-kube-api-access-chd45\") pod \"6b531318-bcfa-4162-b306-3a293fa21814\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.686412 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b531318-bcfa-4162-b306-3a293fa21814-scripts\") pod \"6b531318-bcfa-4162-b306-3a293fa21814\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.686481 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd366a77-4e31-4194-b0fb-2889144e8441-config-data\") pod \"cd366a77-4e31-4194-b0fb-2889144e8441\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.686555 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd366a77-4e31-4194-b0fb-2889144e8441-logs\") pod \"cd366a77-4e31-4194-b0fb-2889144e8441\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.686615 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gklgk\" (UniqueName: \"kubernetes.io/projected/cd366a77-4e31-4194-b0fb-2889144e8441-kube-api-access-gklgk\") pod \"cd366a77-4e31-4194-b0fb-2889144e8441\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.686651 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b531318-bcfa-4162-b306-3a293fa21814-horizon-secret-key\") pod \"6b531318-bcfa-4162-b306-3a293fa21814\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.686677 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd366a77-4e31-4194-b0fb-2889144e8441-scripts\") pod \"cd366a77-4e31-4194-b0fb-2889144e8441\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.686717 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cd366a77-4e31-4194-b0fb-2889144e8441-horizon-secret-key\") pod \"cd366a77-4e31-4194-b0fb-2889144e8441\" (UID: \"cd366a77-4e31-4194-b0fb-2889144e8441\") " Jan 29 17:08:23 
crc kubenswrapper[4895]: I0129 17:08:23.686734 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b531318-bcfa-4162-b306-3a293fa21814-config-data\") pod \"6b531318-bcfa-4162-b306-3a293fa21814\" (UID: \"6b531318-bcfa-4162-b306-3a293fa21814\") " Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.686895 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b531318-bcfa-4162-b306-3a293fa21814-logs" (OuterVolumeSpecName: "logs") pod "6b531318-bcfa-4162-b306-3a293fa21814" (UID: "6b531318-bcfa-4162-b306-3a293fa21814"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.687172 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd366a77-4e31-4194-b0fb-2889144e8441-logs" (OuterVolumeSpecName: "logs") pod "cd366a77-4e31-4194-b0fb-2889144e8441" (UID: "cd366a77-4e31-4194-b0fb-2889144e8441"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.687263 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b531318-bcfa-4162-b306-3a293fa21814-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.687280 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd366a77-4e31-4194-b0fb-2889144e8441-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.695628 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd366a77-4e31-4194-b0fb-2889144e8441-kube-api-access-gklgk" (OuterVolumeSpecName: "kube-api-access-gklgk") pod "cd366a77-4e31-4194-b0fb-2889144e8441" (UID: "cd366a77-4e31-4194-b0fb-2889144e8441"). InnerVolumeSpecName "kube-api-access-gklgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.695755 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd366a77-4e31-4194-b0fb-2889144e8441-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cd366a77-4e31-4194-b0fb-2889144e8441" (UID: "cd366a77-4e31-4194-b0fb-2889144e8441"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.699070 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b531318-bcfa-4162-b306-3a293fa21814-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6b531318-bcfa-4162-b306-3a293fa21814" (UID: "6b531318-bcfa-4162-b306-3a293fa21814"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.703638 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b531318-bcfa-4162-b306-3a293fa21814-kube-api-access-chd45" (OuterVolumeSpecName: "kube-api-access-chd45") pod "6b531318-bcfa-4162-b306-3a293fa21814" (UID: "6b531318-bcfa-4162-b306-3a293fa21814"). InnerVolumeSpecName "kube-api-access-chd45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.718464 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd366a77-4e31-4194-b0fb-2889144e8441-config-data" (OuterVolumeSpecName: "config-data") pod "cd366a77-4e31-4194-b0fb-2889144e8441" (UID: "cd366a77-4e31-4194-b0fb-2889144e8441"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.721112 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b531318-bcfa-4162-b306-3a293fa21814-scripts" (OuterVolumeSpecName: "scripts") pod "6b531318-bcfa-4162-b306-3a293fa21814" (UID: "6b531318-bcfa-4162-b306-3a293fa21814"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.721664 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b531318-bcfa-4162-b306-3a293fa21814-config-data" (OuterVolumeSpecName: "config-data") pod "6b531318-bcfa-4162-b306-3a293fa21814" (UID: "6b531318-bcfa-4162-b306-3a293fa21814"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.725555 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd366a77-4e31-4194-b0fb-2889144e8441-scripts" (OuterVolumeSpecName: "scripts") pod "cd366a77-4e31-4194-b0fb-2889144e8441" (UID: "cd366a77-4e31-4194-b0fb-2889144e8441"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.789138 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chd45\" (UniqueName: \"kubernetes.io/projected/6b531318-bcfa-4162-b306-3a293fa21814-kube-api-access-chd45\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.789172 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b531318-bcfa-4162-b306-3a293fa21814-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.789183 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd366a77-4e31-4194-b0fb-2889144e8441-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.789192 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gklgk\" (UniqueName: \"kubernetes.io/projected/cd366a77-4e31-4194-b0fb-2889144e8441-kube-api-access-gklgk\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.789202 4895 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b531318-bcfa-4162-b306-3a293fa21814-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.789215 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/cd366a77-4e31-4194-b0fb-2889144e8441-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.789224 4895 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cd366a77-4e31-4194-b0fb-2889144e8441-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:23 crc kubenswrapper[4895]: I0129 17:08:23.789232 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b531318-bcfa-4162-b306-3a293fa21814-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:24 crc kubenswrapper[4895]: I0129 17:08:24.473415 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d595c8b5-wgg2f" event={"ID":"cd366a77-4e31-4194-b0fb-2889144e8441","Type":"ContainerDied","Data":"e550434befa2dd0229ffdadcba1c4b567e81794d0de2689adbbabe6d48aacc18"} Jan 29 17:08:24 crc kubenswrapper[4895]: I0129 17:08:24.473467 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9d595c8b5-wgg2f" Jan 29 17:08:24 crc kubenswrapper[4895]: I0129 17:08:24.473479 4895 scope.go:117] "RemoveContainer" containerID="be1fc858f94efd76cd4d7fecaf284474fa0fc29b939da497fd70516512e0ef24" Jan 29 17:08:24 crc kubenswrapper[4895]: I0129 17:08:24.477391 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84675dc6ff-mgf7b" event={"ID":"6b531318-bcfa-4162-b306-3a293fa21814","Type":"ContainerDied","Data":"a521c82d95ceeae518f806d19c820d0526e69de1b39752409f3cc10e619cf6b2"} Jan 29 17:08:24 crc kubenswrapper[4895]: I0129 17:08:24.477483 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84675dc6ff-mgf7b" Jan 29 17:08:24 crc kubenswrapper[4895]: I0129 17:08:24.519099 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9d595c8b5-wgg2f"] Jan 29 17:08:24 crc kubenswrapper[4895]: I0129 17:08:24.528863 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9d595c8b5-wgg2f"] Jan 29 17:08:24 crc kubenswrapper[4895]: I0129 17:08:24.538271 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84675dc6ff-mgf7b"] Jan 29 17:08:24 crc kubenswrapper[4895]: I0129 17:08:24.547496 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-84675dc6ff-mgf7b"] Jan 29 17:08:24 crc kubenswrapper[4895]: I0129 17:08:24.647534 4895 scope.go:117] "RemoveContainer" containerID="c94d41eb3011c6449616d3d62b7fa46e7fddc36f60a275325f29700f46c4c1b4" Jan 29 17:08:24 crc kubenswrapper[4895]: I0129 17:08:24.666345 4895 scope.go:117] "RemoveContainer" containerID="28ee92cb98352b3c0e651cf8cff619dac548878e6e0d1c8d7504f5e1696c5b0a" Jan 29 17:08:24 crc kubenswrapper[4895]: I0129 17:08:24.824037 4895 scope.go:117] "RemoveContainer" containerID="1f9479d7dfd7363ee8f36ccc3eb5c33974188c55b51f3b9d0ef205920dc4f529" Jan 29 17:08:25 crc kubenswrapper[4895]: I0129 17:08:25.050615 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b531318-bcfa-4162-b306-3a293fa21814" path="/var/lib/kubelet/pods/6b531318-bcfa-4162-b306-3a293fa21814/volumes" Jan 29 17:08:25 crc kubenswrapper[4895]: I0129 17:08:25.051541 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd366a77-4e31-4194-b0fb-2889144e8441" path="/var/lib/kubelet/pods/cd366a77-4e31-4194-b0fb-2889144e8441/volumes" Jan 29 17:08:25 crc kubenswrapper[4895]: I0129 17:08:25.570733 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66d48bc97b-rb85g" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon" probeResult="failure" 
output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 29 17:08:31 crc kubenswrapper[4895]: E0129 17:08:31.040371 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:08:31 crc kubenswrapper[4895]: E0129 17:08:31.041907 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:08:34 crc kubenswrapper[4895]: E0129 17:08:34.039602 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:08:35 crc kubenswrapper[4895]: I0129 17:08:35.571278 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66d48bc97b-rb85g" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 29 17:08:37 crc kubenswrapper[4895]: I0129 17:08:37.043004 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:08:37 crc kubenswrapper[4895]: E0129 17:08:37.043270 4895 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:08:42 crc kubenswrapper[4895]: E0129 17:08:42.039736 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:08:45 crc kubenswrapper[4895]: I0129 17:08:45.570596 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66d48bc97b-rb85g" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Jan 29 17:08:46 crc kubenswrapper[4895]: E0129 17:08:46.039490 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:08:46 crc kubenswrapper[4895]: E0129 17:08:46.039884 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:08:47 crc kubenswrapper[4895]: I0129 
17:08:47.731242 4895 generic.go:334] "Generic (PLEG): container finished" podID="ae3f812e-d3cf-4cac-b58f-bb93fe0557bd" containerID="1bb39dc9487955ca75523c7b2297397729282287ce5337d94d812fd137077d8b" exitCode=0 Jan 29 17:08:47 crc kubenswrapper[4895]: I0129 17:08:47.731353 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-gmsgc" event={"ID":"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd","Type":"ContainerDied","Data":"1bb39dc9487955ca75523c7b2297397729282287ce5337d94d812fd137077d8b"} Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.139131 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-gmsgc" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.196998 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np8bg\" (UniqueName: \"kubernetes.io/projected/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-kube-api-access-np8bg\") pod \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.197089 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-config-data\") pod \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.197127 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-job-config-data\") pod \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.197176 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-combined-ca-bundle\") pod \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\" (UID: \"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd\") " Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.205829 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-kube-api-access-np8bg" (OuterVolumeSpecName: "kube-api-access-np8bg") pod "ae3f812e-d3cf-4cac-b58f-bb93fe0557bd" (UID: "ae3f812e-d3cf-4cac-b58f-bb93fe0557bd"). InnerVolumeSpecName "kube-api-access-np8bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.206290 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "ae3f812e-d3cf-4cac-b58f-bb93fe0557bd" (UID: "ae3f812e-d3cf-4cac-b58f-bb93fe0557bd"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.209075 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-config-data" (OuterVolumeSpecName: "config-data") pod "ae3f812e-d3cf-4cac-b58f-bb93fe0557bd" (UID: "ae3f812e-d3cf-4cac-b58f-bb93fe0557bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.231011 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae3f812e-d3cf-4cac-b58f-bb93fe0557bd" (UID: "ae3f812e-d3cf-4cac-b58f-bb93fe0557bd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.300343 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.300394 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np8bg\" (UniqueName: \"kubernetes.io/projected/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-kube-api-access-np8bg\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.300405 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.300414 4895 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.754272 4895 generic.go:334] "Generic (PLEG): container finished" podID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerID="e86774c621aac90c360b4426bd303af8358f8b72c73c8673781b7877bda63cbc" exitCode=137 Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.754344 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d48bc97b-rb85g" event={"ID":"498b6cd8-82b2-47bf-ac98-612780a6a4f9","Type":"ContainerDied","Data":"e86774c621aac90c360b4426bd303af8358f8b72c73c8673781b7877bda63cbc"} Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.762751 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-gmsgc" 
event={"ID":"ae3f812e-d3cf-4cac-b58f-bb93fe0557bd","Type":"ContainerDied","Data":"beb8f28abb6938dfb56f13ec3fa44e3c37effb94bb9ff7cc0443e55c7f2a8fef"} Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.762792 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb8f28abb6938dfb56f13ec3fa44e3c37effb94bb9ff7cc0443e55c7f2a8fef" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.762790 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-gmsgc" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.851079 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.914265 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcxwc\" (UniqueName: \"kubernetes.io/projected/498b6cd8-82b2-47bf-ac98-612780a6a4f9-kube-api-access-fcxwc\") pod \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.914365 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498b6cd8-82b2-47bf-ac98-612780a6a4f9-config-data\") pod \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.914419 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498b6cd8-82b2-47bf-ac98-612780a6a4f9-logs\") pod \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.914587 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-horizon-secret-key\") pod \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.914780 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/498b6cd8-82b2-47bf-ac98-612780a6a4f9-scripts\") pod \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.914829 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-combined-ca-bundle\") pod \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.914994 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-horizon-tls-certs\") pod \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\" (UID: \"498b6cd8-82b2-47bf-ac98-612780a6a4f9\") " Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.915111 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498b6cd8-82b2-47bf-ac98-612780a6a4f9-logs" (OuterVolumeSpecName: "logs") pod "498b6cd8-82b2-47bf-ac98-612780a6a4f9" (UID: "498b6cd8-82b2-47bf-ac98-612780a6a4f9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.915534 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498b6cd8-82b2-47bf-ac98-612780a6a4f9-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.922864 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "498b6cd8-82b2-47bf-ac98-612780a6a4f9" (UID: "498b6cd8-82b2-47bf-ac98-612780a6a4f9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.923109 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498b6cd8-82b2-47bf-ac98-612780a6a4f9-kube-api-access-fcxwc" (OuterVolumeSpecName: "kube-api-access-fcxwc") pod "498b6cd8-82b2-47bf-ac98-612780a6a4f9" (UID: "498b6cd8-82b2-47bf-ac98-612780a6a4f9"). InnerVolumeSpecName "kube-api-access-fcxwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.949303 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "498b6cd8-82b2-47bf-ac98-612780a6a4f9" (UID: "498b6cd8-82b2-47bf-ac98-612780a6a4f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.950514 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498b6cd8-82b2-47bf-ac98-612780a6a4f9-config-data" (OuterVolumeSpecName: "config-data") pod "498b6cd8-82b2-47bf-ac98-612780a6a4f9" (UID: "498b6cd8-82b2-47bf-ac98-612780a6a4f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.964766 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498b6cd8-82b2-47bf-ac98-612780a6a4f9-scripts" (OuterVolumeSpecName: "scripts") pod "498b6cd8-82b2-47bf-ac98-612780a6a4f9" (UID: "498b6cd8-82b2-47bf-ac98-612780a6a4f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:08:49 crc kubenswrapper[4895]: I0129 17:08:49.981789 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "498b6cd8-82b2-47bf-ac98-612780a6a4f9" (UID: "498b6cd8-82b2-47bf-ac98-612780a6a4f9"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:49 crc kubenswrapper[4895]: E0129 17:08:49.991554 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae3f812e_d3cf_4cac_b58f_bb93fe0557bd.slice/crio-beb8f28abb6938dfb56f13ec3fa44e3c37effb94bb9ff7cc0443e55c7f2a8fef\": RecentStats: unable to find data in memory cache]" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.016672 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 17:08:50 crc kubenswrapper[4895]: E0129 17:08:50.017089 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b531318-bcfa-4162-b306-3a293fa21814" containerName="horizon" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017116 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b531318-bcfa-4162-b306-3a293fa21814" containerName="horizon" Jan 29 17:08:50 crc kubenswrapper[4895]: E0129 17:08:50.017131 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b531318-bcfa-4162-b306-3a293fa21814" containerName="horizon-log" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017137 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b531318-bcfa-4162-b306-3a293fa21814" containerName="horizon-log" Jan 29 17:08:50 crc kubenswrapper[4895]: E0129 17:08:50.017154 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd366a77-4e31-4194-b0fb-2889144e8441" containerName="horizon" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017161 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd366a77-4e31-4194-b0fb-2889144e8441" containerName="horizon" Jan 29 17:08:50 crc kubenswrapper[4895]: E0129 17:08:50.017172 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3f812e-d3cf-4cac-b58f-bb93fe0557bd" containerName="manila-db-sync" Jan 29 17:08:50 crc 
kubenswrapper[4895]: I0129 17:08:50.017178 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3f812e-d3cf-4cac-b58f-bb93fe0557bd" containerName="manila-db-sync" Jan 29 17:08:50 crc kubenswrapper[4895]: E0129 17:08:50.017188 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon-log" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017194 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon-log" Jan 29 17:08:50 crc kubenswrapper[4895]: E0129 17:08:50.017208 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd366a77-4e31-4194-b0fb-2889144e8441" containerName="horizon-log" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017213 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd366a77-4e31-4194-b0fb-2889144e8441" containerName="horizon-log" Jan 29 17:08:50 crc kubenswrapper[4895]: E0129 17:08:50.017221 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017227 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017274 4895 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017306 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/498b6cd8-82b2-47bf-ac98-612780a6a4f9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017317 4895 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017326 4895 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/498b6cd8-82b2-47bf-ac98-612780a6a4f9-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017336 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcxwc\" (UniqueName: \"kubernetes.io/projected/498b6cd8-82b2-47bf-ac98-612780a6a4f9-kube-api-access-fcxwc\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017346 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498b6cd8-82b2-47bf-ac98-612780a6a4f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017402 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd366a77-4e31-4194-b0fb-2889144e8441" containerName="horizon-log" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017414 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017432 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd366a77-4e31-4194-b0fb-2889144e8441" containerName="horizon" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017445 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b531318-bcfa-4162-b306-3a293fa21814" containerName="horizon-log" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017457 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" containerName="horizon-log" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 
17:08:50.017511 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3f812e-d3cf-4cac-b58f-bb93fe0557bd" containerName="manila-db-sync" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.017523 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b531318-bcfa-4162-b306-3a293fa21814" containerName="horizon" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.018541 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.021152 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.022421 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.022608 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-t4vr6" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.022632 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.040304 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:08:50 crc kubenswrapper[4895]: E0129 17:08:50.040751 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.059368 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/manila-scheduler-0"] Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.102272 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.104281 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.109553 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.119456 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.119530 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-ceph\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.119592 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-scripts\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.119727 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6629\" (UniqueName: \"kubernetes.io/projected/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-kube-api-access-k6629\") pod 
\"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.119884 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.120483 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.126015 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.126195 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-config-data\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.126367 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-scripts\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.126440 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.126491 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.126540 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjkfj\" (UniqueName: \"kubernetes.io/projected/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-kube-api-access-mjkfj\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.126619 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.126668 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-config-data\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.126773 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.188748 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-9bxf2"] Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.191020 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.215109 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-9bxf2"] Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229514 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229591 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-scripts\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229622 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229642 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229682 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhvx\" (UniqueName: \"kubernetes.io/projected/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-kube-api-access-2lhvx\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229702 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229718 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjkfj\" (UniqueName: \"kubernetes.io/projected/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-kube-api-access-mjkfj\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229769 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229786 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-config-data\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229840 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229886 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229907 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229933 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-ceph\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.229967 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.230001 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-scripts\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.230064 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6629\" (UniqueName: \"kubernetes.io/projected/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-kube-api-access-k6629\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.230126 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.230147 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-config\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.230192 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " 
pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.230238 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-config-data\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.236353 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.236514 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.237948 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.238858 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-scripts\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.251456 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.252120 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-config-data\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.258461 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-ceph\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.258574 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.259248 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-config-data\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.262076 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-scripts\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " 
pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.262604 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.264321 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.266531 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjkfj\" (UniqueName: \"kubernetes.io/projected/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-kube-api-access-mjkfj\") pod \"manila-scheduler-0\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.268036 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6629\" (UniqueName: \"kubernetes.io/projected/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-kube-api-access-k6629\") pod \"manila-share-share1-0\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.332982 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lhvx\" (UniqueName: \"kubernetes.io/projected/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-kube-api-access-2lhvx\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.333034 
4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.333090 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.333117 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.333192 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-config\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.333246 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.334037 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.334777 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.335297 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.335797 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.335857 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-config\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.376837 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lhvx\" (UniqueName: 
\"kubernetes.io/projected/892c18fa-4c09-46ac-aa0e-42f0466f4b5c-kube-api-access-2lhvx\") pod \"dnsmasq-dns-69655fd4bf-9bxf2\" (UID: \"892c18fa-4c09-46ac-aa0e-42f0466f4b5c\") " pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.417070 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.418690 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.427823 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.432113 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.433108 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.437623 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.520474 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.638320 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae994946-ceb2-4a89-af8d-ac902ea62c90-etc-machine-id\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.639027 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.639094 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-config-data-custom\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.639147 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-config-data\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.639230 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae994946-ceb2-4a89-af8d-ac902ea62c90-logs\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.639412 4895 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-scripts\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.639472 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjbvn\" (UniqueName: \"kubernetes.io/projected/ae994946-ceb2-4a89-af8d-ac902ea62c90-kube-api-access-hjbvn\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.740822 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-scripts\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.740918 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjbvn\" (UniqueName: \"kubernetes.io/projected/ae994946-ceb2-4a89-af8d-ac902ea62c90-kube-api-access-hjbvn\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.741003 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae994946-ceb2-4a89-af8d-ac902ea62c90-etc-machine-id\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.741048 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.741075 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-config-data-custom\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.741096 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-config-data\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.741151 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae994946-ceb2-4a89-af8d-ac902ea62c90-logs\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.742110 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae994946-ceb2-4a89-af8d-ac902ea62c90-logs\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.742901 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae994946-ceb2-4a89-af8d-ac902ea62c90-etc-machine-id\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.748793 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-config-data\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.749971 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.758472 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-scripts\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.758552 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-config-data-custom\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.766310 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjbvn\" (UniqueName: \"kubernetes.io/projected/ae994946-ceb2-4a89-af8d-ac902ea62c90-kube-api-access-hjbvn\") pod \"manila-api-0\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.775086 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d48bc97b-rb85g" event={"ID":"498b6cd8-82b2-47bf-ac98-612780a6a4f9","Type":"ContainerDied","Data":"608d48ae23c2e4a3b64a00b0eeede284c472002d9b6cdda0a11d93681bfef26b"} Jan 29 17:08:50 
crc kubenswrapper[4895]: I0129 17:08:50.775167 4895 scope.go:117] "RemoveContainer" containerID="337cfaccd82c9afd05fe7fb16f4998cf445980bed998a2fa584001d1e7039045" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.775440 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d48bc97b-rb85g" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.808475 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.855020 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66d48bc97b-rb85g"] Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.877797 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66d48bc97b-rb85g"] Jan 29 17:08:50 crc kubenswrapper[4895]: I0129 17:08:50.975320 4895 scope.go:117] "RemoveContainer" containerID="e86774c621aac90c360b4426bd303af8358f8b72c73c8673781b7877bda63cbc" Jan 29 17:08:51 crc kubenswrapper[4895]: I0129 17:08:51.050830 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498b6cd8-82b2-47bf-ac98-612780a6a4f9" path="/var/lib/kubelet/pods/498b6cd8-82b2-47bf-ac98-612780a6a4f9/volumes" Jan 29 17:08:51 crc kubenswrapper[4895]: I0129 17:08:51.189487 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-9bxf2"] Jan 29 17:08:51 crc kubenswrapper[4895]: I0129 17:08:51.202007 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 17:08:51 crc kubenswrapper[4895]: I0129 17:08:51.386999 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 17:08:51 crc kubenswrapper[4895]: W0129 17:08:51.584204 4895 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae994946_ceb2_4a89_af8d_ac902ea62c90.slice/crio-75097466a9e32015f88b9fbbd6f3ed28a72f07dfabaf161a5cb64ade1bab1049 WatchSource:0}: Error finding container 75097466a9e32015f88b9fbbd6f3ed28a72f07dfabaf161a5cb64ade1bab1049: Status 404 returned error can't find the container with id 75097466a9e32015f88b9fbbd6f3ed28a72f07dfabaf161a5cb64ade1bab1049 Jan 29 17:08:51 crc kubenswrapper[4895]: I0129 17:08:51.585639 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 29 17:08:51 crc kubenswrapper[4895]: I0129 17:08:51.825389 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9","Type":"ContainerStarted","Data":"2c85b848ce61180bab3d6ec4ae63acd0cee29af34d2717b90f8bf4334fa6c8d8"} Jan 29 17:08:51 crc kubenswrapper[4895]: I0129 17:08:51.829001 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" event={"ID":"892c18fa-4c09-46ac-aa0e-42f0466f4b5c","Type":"ContainerStarted","Data":"b9acadc44729e8281a051e815bd850891a7cf7bc3c4aaa7a71eb0112753d994a"} Jan 29 17:08:51 crc kubenswrapper[4895]: I0129 17:08:51.829055 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" event={"ID":"892c18fa-4c09-46ac-aa0e-42f0466f4b5c","Type":"ContainerStarted","Data":"e41d1d16196654cf7670f019eae8c63ded425820316d7c76f6f81164097ba450"} Jan 29 17:08:51 crc kubenswrapper[4895]: I0129 17:08:51.832685 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5","Type":"ContainerStarted","Data":"5b31c673a720d5283137a7691c852b9ed4ee7f30c7ff9e3f5bc779acfca01d95"} Jan 29 17:08:51 crc kubenswrapper[4895]: I0129 17:08:51.837817 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"ae994946-ceb2-4a89-af8d-ac902ea62c90","Type":"ContainerStarted","Data":"75097466a9e32015f88b9fbbd6f3ed28a72f07dfabaf161a5cb64ade1bab1049"} Jan 29 17:08:52 crc kubenswrapper[4895]: I0129 17:08:52.843518 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 29 17:08:52 crc kubenswrapper[4895]: I0129 17:08:52.857798 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ae994946-ceb2-4a89-af8d-ac902ea62c90","Type":"ContainerStarted","Data":"31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db"} Jan 29 17:08:52 crc kubenswrapper[4895]: I0129 17:08:52.859821 4895 generic.go:334] "Generic (PLEG): container finished" podID="892c18fa-4c09-46ac-aa0e-42f0466f4b5c" containerID="b9acadc44729e8281a051e815bd850891a7cf7bc3c4aaa7a71eb0112753d994a" exitCode=0 Jan 29 17:08:52 crc kubenswrapper[4895]: I0129 17:08:52.859849 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" event={"ID":"892c18fa-4c09-46ac-aa0e-42f0466f4b5c","Type":"ContainerDied","Data":"b9acadc44729e8281a051e815bd850891a7cf7bc3c4aaa7a71eb0112753d994a"} Jan 29 17:08:53 crc kubenswrapper[4895]: I0129 17:08:53.873507 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5","Type":"ContainerStarted","Data":"6f6cf4844b42c6b02157e07d0d2bbcccce0d103277242bf328a5e9223884d4a8"} Jan 29 17:08:53 crc kubenswrapper[4895]: I0129 17:08:53.888429 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ae994946-ceb2-4a89-af8d-ac902ea62c90","Type":"ContainerStarted","Data":"6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975"} Jan 29 17:08:53 crc kubenswrapper[4895]: I0129 17:08:53.888558 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 29 17:08:53 crc kubenswrapper[4895]: I0129 17:08:53.888562 4895 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="ae994946-ceb2-4a89-af8d-ac902ea62c90" containerName="manila-api-log" containerID="cri-o://31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db" gracePeriod=30 Jan 29 17:08:53 crc kubenswrapper[4895]: I0129 17:08:53.888690 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="ae994946-ceb2-4a89-af8d-ac902ea62c90" containerName="manila-api" containerID="cri-o://6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975" gracePeriod=30 Jan 29 17:08:53 crc kubenswrapper[4895]: I0129 17:08:53.895689 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" event={"ID":"892c18fa-4c09-46ac-aa0e-42f0466f4b5c","Type":"ContainerStarted","Data":"a89acfe6fdedfb1a2dd3af7e270460775b6d8e49be34b2a2664b76a9f90daf00"} Jan 29 17:08:53 crc kubenswrapper[4895]: I0129 17:08:53.896218 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:08:53 crc kubenswrapper[4895]: I0129 17:08:53.916646 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.916629366 podStartE2EDuration="3.916629366s" podCreationTimestamp="2026-01-29 17:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:53.913558433 +0000 UTC m=+3417.716535697" watchObservedRunningTime="2026-01-29 17:08:53.916629366 +0000 UTC m=+3417.719606630" Jan 29 17:08:53 crc kubenswrapper[4895]: I0129 17:08:53.950692 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" podStartSLOduration=3.9506612690000003 podStartE2EDuration="3.950661269s" podCreationTimestamp="2026-01-29 17:08:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:08:53.937598714 +0000 UTC m=+3417.740575978" watchObservedRunningTime="2026-01-29 17:08:53.950661269 +0000 UTC m=+3417.753638553" Jan 29 17:08:54 crc kubenswrapper[4895]: E0129 17:08:54.040098 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.604901 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.659480 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae994946-ceb2-4a89-af8d-ac902ea62c90-etc-machine-id\") pod \"ae994946-ceb2-4a89-af8d-ac902ea62c90\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.659954 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-config-data\") pod \"ae994946-ceb2-4a89-af8d-ac902ea62c90\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.660076 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae994946-ceb2-4a89-af8d-ac902ea62c90-logs\") pod \"ae994946-ceb2-4a89-af8d-ac902ea62c90\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.660183 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-config-data-custom\") pod \"ae994946-ceb2-4a89-af8d-ac902ea62c90\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.660217 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-scripts\") pod \"ae994946-ceb2-4a89-af8d-ac902ea62c90\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.660460 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae994946-ceb2-4a89-af8d-ac902ea62c90-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ae994946-ceb2-4a89-af8d-ac902ea62c90" (UID: "ae994946-ceb2-4a89-af8d-ac902ea62c90"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.660919 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjbvn\" (UniqueName: \"kubernetes.io/projected/ae994946-ceb2-4a89-af8d-ac902ea62c90-kube-api-access-hjbvn\") pod \"ae994946-ceb2-4a89-af8d-ac902ea62c90\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.660957 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae994946-ceb2-4a89-af8d-ac902ea62c90-logs" (OuterVolumeSpecName: "logs") pod "ae994946-ceb2-4a89-af8d-ac902ea62c90" (UID: "ae994946-ceb2-4a89-af8d-ac902ea62c90"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.661018 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-combined-ca-bundle\") pod \"ae994946-ceb2-4a89-af8d-ac902ea62c90\" (UID: \"ae994946-ceb2-4a89-af8d-ac902ea62c90\") " Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.661655 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae994946-ceb2-4a89-af8d-ac902ea62c90-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.661680 4895 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae994946-ceb2-4a89-af8d-ac902ea62c90-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.672035 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-scripts" (OuterVolumeSpecName: "scripts") pod "ae994946-ceb2-4a89-af8d-ac902ea62c90" (UID: "ae994946-ceb2-4a89-af8d-ac902ea62c90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.672495 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ae994946-ceb2-4a89-af8d-ac902ea62c90" (UID: "ae994946-ceb2-4a89-af8d-ac902ea62c90"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.682109 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae994946-ceb2-4a89-af8d-ac902ea62c90-kube-api-access-hjbvn" (OuterVolumeSpecName: "kube-api-access-hjbvn") pod "ae994946-ceb2-4a89-af8d-ac902ea62c90" (UID: "ae994946-ceb2-4a89-af8d-ac902ea62c90"). InnerVolumeSpecName "kube-api-access-hjbvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.721962 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae994946-ceb2-4a89-af8d-ac902ea62c90" (UID: "ae994946-ceb2-4a89-af8d-ac902ea62c90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.735196 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-config-data" (OuterVolumeSpecName: "config-data") pod "ae994946-ceb2-4a89-af8d-ac902ea62c90" (UID: "ae994946-ceb2-4a89-af8d-ac902ea62c90"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.764463 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.764509 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.764520 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.764532 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae994946-ceb2-4a89-af8d-ac902ea62c90-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.764542 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjbvn\" (UniqueName: \"kubernetes.io/projected/ae994946-ceb2-4a89-af8d-ac902ea62c90-kube-api-access-hjbvn\") on node \"crc\" DevicePath \"\"" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.912475 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5","Type":"ContainerStarted","Data":"3d81ec1ccf585fa30eb070500fa060e3f4d854ea8d9a45f5d2c9c404ca72eb3b"} Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.915764 4895 generic.go:334] "Generic (PLEG): container finished" podID="ae994946-ceb2-4a89-af8d-ac902ea62c90" containerID="6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975" exitCode=0 Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 
17:08:54.915795 4895 generic.go:334] "Generic (PLEG): container finished" podID="ae994946-ceb2-4a89-af8d-ac902ea62c90" containerID="31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db" exitCode=143 Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.915975 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.916522 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ae994946-ceb2-4a89-af8d-ac902ea62c90","Type":"ContainerDied","Data":"6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975"} Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.916601 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ae994946-ceb2-4a89-af8d-ac902ea62c90","Type":"ContainerDied","Data":"31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db"} Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.916619 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"ae994946-ceb2-4a89-af8d-ac902ea62c90","Type":"ContainerDied","Data":"75097466a9e32015f88b9fbbd6f3ed28a72f07dfabaf161a5cb64ade1bab1049"} Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.916643 4895 scope.go:117] "RemoveContainer" containerID="6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.938567 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.023453307 podStartE2EDuration="5.938543956s" podCreationTimestamp="2026-01-29 17:08:49 +0000 UTC" firstStartedPulling="2026-01-29 17:08:51.242864047 +0000 UTC m=+3415.045841311" lastFinishedPulling="2026-01-29 17:08:53.157954696 +0000 UTC m=+3416.960931960" observedRunningTime="2026-01-29 17:08:54.933597672 +0000 UTC m=+3418.736574946" 
watchObservedRunningTime="2026-01-29 17:08:54.938543956 +0000 UTC m=+3418.741521240" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.964774 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.986988 4895 scope.go:117] "RemoveContainer" containerID="31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db" Jan 29 17:08:54 crc kubenswrapper[4895]: I0129 17:08:54.991790 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.011482 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 29 17:08:55 crc kubenswrapper[4895]: E0129 17:08:55.011961 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae994946-ceb2-4a89-af8d-ac902ea62c90" containerName="manila-api-log" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.011980 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae994946-ceb2-4a89-af8d-ac902ea62c90" containerName="manila-api-log" Jan 29 17:08:55 crc kubenswrapper[4895]: E0129 17:08:55.012022 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae994946-ceb2-4a89-af8d-ac902ea62c90" containerName="manila-api" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.012028 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae994946-ceb2-4a89-af8d-ac902ea62c90" containerName="manila-api" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.012209 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae994946-ceb2-4a89-af8d-ac902ea62c90" containerName="manila-api-log" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.012235 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae994946-ceb2-4a89-af8d-ac902ea62c90" containerName="manila-api" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.013194 4895 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.016190 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.016574 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.016730 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.041696 4895 scope.go:117] "RemoveContainer" containerID="6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975" Jan 29 17:08:55 crc kubenswrapper[4895]: E0129 17:08:55.050948 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975\": container with ID starting with 6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975 not found: ID does not exist" containerID="6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.051256 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975"} err="failed to get container status \"6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975\": rpc error: code = NotFound desc = could not find container \"6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975\": container with ID starting with 6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975 not found: ID does not exist" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.051296 4895 scope.go:117] "RemoveContainer" containerID="31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db" Jan 29 17:08:55 crc 
kubenswrapper[4895]: E0129 17:08:55.053468 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db\": container with ID starting with 31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db not found: ID does not exist" containerID="31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.053537 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db"} err="failed to get container status \"31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db\": rpc error: code = NotFound desc = could not find container \"31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db\": container with ID starting with 31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db not found: ID does not exist" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.055051 4895 scope.go:117] "RemoveContainer" containerID="6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.060440 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975"} err="failed to get container status \"6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975\": rpc error: code = NotFound desc = could not find container \"6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975\": container with ID starting with 6157b0acfb4a12a85bb95643442cb6dac31538e974c7bedd9031e5c3e3ccb975 not found: ID does not exist" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.060515 4895 scope.go:117] "RemoveContainer" containerID="31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db" Jan 29 
17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.062503 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db"} err="failed to get container status \"31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db\": rpc error: code = NotFound desc = could not find container \"31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db\": container with ID starting with 31117ff0c16decb6ad5b83c085c6083fe0a7a0ec7684055d1432e2acb3df35db not found: ID does not exist" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.110511 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae994946-ceb2-4a89-af8d-ac902ea62c90" path="/var/lib/kubelet/pods/ae994946-ceb2-4a89-af8d-ac902ea62c90/volumes" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.111197 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.178230 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-internal-tls-certs\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.178330 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-etc-machine-id\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.178368 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-public-tls-certs\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.178415 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-scripts\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.178477 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-config-data-custom\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.178531 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.178587 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-config-data\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.178637 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-logs\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " 
pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.178660 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qgr2\" (UniqueName: \"kubernetes.io/projected/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-kube-api-access-5qgr2\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.280278 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-config-data-custom\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.280362 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.280415 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-config-data\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.280453 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-logs\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.280470 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qgr2\" 
(UniqueName: \"kubernetes.io/projected/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-kube-api-access-5qgr2\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.280502 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-internal-tls-certs\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.280527 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-etc-machine-id\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.280565 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-public-tls-certs\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.280596 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-scripts\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.281029 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-etc-machine-id\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc 
kubenswrapper[4895]: I0129 17:08:55.281080 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-logs\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.286922 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.287002 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-public-tls-certs\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.287855 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-scripts\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.293085 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-config-data\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.293984 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-config-data-custom\") pod \"manila-api-0\" (UID: 
\"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.295751 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-internal-tls-certs\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.300563 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qgr2\" (UniqueName: \"kubernetes.io/projected/c1411d2f-0b19-4ba7-bca3-0e19bfaa3002-kube-api-access-5qgr2\") pod \"manila-api-0\" (UID: \"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002\") " pod="openstack/manila-api-0" Jan 29 17:08:55 crc kubenswrapper[4895]: I0129 17:08:55.355001 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.398446 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.399071 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="ceilometer-central-agent" containerID="cri-o://0916f4c8b5fef1c16cda0e0a7c9e21e24169b8f300b33e66a6f19bbe7420fa19" gracePeriod=30 Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.399554 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="proxy-httpd" containerID="cri-o://38da4e7eaf8beb1ea68be8c59c0506c5e1ddd6eb195901ef4dd58eb504b224dd" gracePeriod=30 Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.399595 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="sg-core" containerID="cri-o://3bafd5d9bdeba9997c1409b4780f74c5177c9c81eb08f66013eba43652dc631f" gracePeriod=30 Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.399631 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="ceilometer-notification-agent" containerID="cri-o://acc83029a70f2cb503d0d85246be47849baa775380e49e55638e306b861fe414" gracePeriod=30 Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.670332 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.946202 4895 generic.go:334] "Generic (PLEG): container finished" podID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerID="38da4e7eaf8beb1ea68be8c59c0506c5e1ddd6eb195901ef4dd58eb504b224dd" exitCode=0 Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.946235 4895 generic.go:334] "Generic (PLEG): container finished" podID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerID="3bafd5d9bdeba9997c1409b4780f74c5177c9c81eb08f66013eba43652dc631f" exitCode=2 Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.946243 4895 generic.go:334] "Generic (PLEG): container finished" podID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerID="0916f4c8b5fef1c16cda0e0a7c9e21e24169b8f300b33e66a6f19bbe7420fa19" exitCode=0 Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.946287 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5","Type":"ContainerDied","Data":"38da4e7eaf8beb1ea68be8c59c0506c5e1ddd6eb195901ef4dd58eb504b224dd"} Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.946336 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5","Type":"ContainerDied","Data":"3bafd5d9bdeba9997c1409b4780f74c5177c9c81eb08f66013eba43652dc631f"} Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.946351 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5","Type":"ContainerDied","Data":"0916f4c8b5fef1c16cda0e0a7c9e21e24169b8f300b33e66a6f19bbe7420fa19"} Jan 29 17:08:56 crc kubenswrapper[4895]: I0129 17:08:56.948055 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002","Type":"ContainerStarted","Data":"fa2fd7308ddace9fac4b5368d22a2428d088cfa802bcf727169f1f1a331ad903"} Jan 29 17:08:57 crc kubenswrapper[4895]: I0129 17:08:57.962332 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002","Type":"ContainerStarted","Data":"094f31ea6c82272ea3d5154f29d657800aba415ec200a200e651ebc54320c1dd"} Jan 29 17:08:59 crc kubenswrapper[4895]: I0129 17:08:59.982389 4895 generic.go:334] "Generic (PLEG): container finished" podID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerID="acc83029a70f2cb503d0d85246be47849baa775380e49e55638e306b861fe414" exitCode=0 Jan 29 17:08:59 crc kubenswrapper[4895]: I0129 17:08:59.982909 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5","Type":"ContainerDied","Data":"acc83029a70f2cb503d0d85246be47849baa775380e49e55638e306b861fe414"} Jan 29 17:09:00 crc kubenswrapper[4895]: E0129 17:09:00.038252 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:09:00 
crc kubenswrapper[4895]: I0129 17:09:00.418771 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.523107 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-9bxf2" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.550822 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.581492 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-rxb7m"] Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.581775 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" podUID="27cd5ecf-82d4-4495-8e11-7ae1b73e6506" containerName="dnsmasq-dns" containerID="cri-o://397a7186e9f85393d3bcaa7cc82cae0af0174d2af02353d34147c282271e8b91" gracePeriod=10 Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.695607 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-config-data\") pod \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.695801 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-combined-ca-bundle\") pod \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.695946 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-scripts\") pod 
\"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.695997 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nc5s\" (UniqueName: \"kubernetes.io/projected/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-kube-api-access-4nc5s\") pod \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.696056 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-log-httpd\") pod \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.696097 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-sg-core-conf-yaml\") pod \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.696136 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-ceilometer-tls-certs\") pod \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.696216 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-run-httpd\") pod \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\" (UID: \"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5\") " Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.698532 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" (UID: "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.698799 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" (UID: "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.706904 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-scripts" (OuterVolumeSpecName: "scripts") pod "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" (UID: "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.723022 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-kube-api-access-4nc5s" (OuterVolumeSpecName: "kube-api-access-4nc5s") pod "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" (UID: "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5"). InnerVolumeSpecName "kube-api-access-4nc5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.750368 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" (UID: "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.762648 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" (UID: "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.786545 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" (UID: "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.799421 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.799455 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nc5s\" (UniqueName: \"kubernetes.io/projected/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-kube-api-access-4nc5s\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.799468 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.799477 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-sg-core-conf-yaml\") on node \"crc\" 
DevicePath \"\"" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.799485 4895 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.799493 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.799501 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.821846 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-config-data" (OuterVolumeSpecName: "config-data") pod "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" (UID: "bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.901108 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.994684 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"c1411d2f-0b19-4ba7-bca3-0e19bfaa3002","Type":"ContainerStarted","Data":"2da87076c95f719dad9734ecf8c9c4846ea0ef5fa0482c199bdad0c107581cac"} Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.998221 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.998212 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5","Type":"ContainerDied","Data":"5b5136d57b64cd1350daddad47e391c1a81264992a26331ee6078c47777f069a"} Jan 29 17:09:00 crc kubenswrapper[4895]: I0129 17:09:00.998380 4895 scope.go:117] "RemoveContainer" containerID="38da4e7eaf8beb1ea68be8c59c0506c5e1ddd6eb195901ef4dd58eb504b224dd" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.000244 4895 generic.go:334] "Generic (PLEG): container finished" podID="27cd5ecf-82d4-4495-8e11-7ae1b73e6506" containerID="397a7186e9f85393d3bcaa7cc82cae0af0174d2af02353d34147c282271e8b91" exitCode=0 Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.000291 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" event={"ID":"27cd5ecf-82d4-4495-8e11-7ae1b73e6506","Type":"ContainerDied","Data":"397a7186e9f85393d3bcaa7cc82cae0af0174d2af02353d34147c282271e8b91"} Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.035046 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=7.035015118 podStartE2EDuration="7.035015118s" podCreationTimestamp="2026-01-29 17:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:01.01924039 +0000 UTC m=+3424.822217654" watchObservedRunningTime="2026-01-29 17:09:01.035015118 +0000 UTC m=+3424.837992382" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.065126 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:01 crc kubenswrapper[4895]: E0129 17:09:01.067846 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.087933 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.097815 4895 scope.go:117] "RemoveContainer" containerID="3bafd5d9bdeba9997c1409b4780f74c5177c9c81eb08f66013eba43652dc631f" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.100654 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:01 crc kubenswrapper[4895]: E0129 17:09:01.101742 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="proxy-httpd" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.101765 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="proxy-httpd" Jan 29 17:09:01 crc kubenswrapper[4895]: E0129 17:09:01.101795 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="ceilometer-notification-agent" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.101805 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="ceilometer-notification-agent" Jan 29 17:09:01 crc kubenswrapper[4895]: E0129 17:09:01.102260 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="ceilometer-central-agent" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.102278 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="ceilometer-central-agent" Jan 29 17:09:01 crc kubenswrapper[4895]: E0129 17:09:01.102287 4895 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="sg-core" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.102293 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="sg-core" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.102755 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="ceilometer-notification-agent" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.102794 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="ceilometer-central-agent" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.102808 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="proxy-httpd" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.102824 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" containerName="sg-core" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.134385 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.136978 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.137431 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.137682 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.146519 4895 scope.go:117] "RemoveContainer" containerID="acc83029a70f2cb503d0d85246be47849baa775380e49e55638e306b861fe414" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.150299 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.168528 4895 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" podUID="27cd5ecf-82d4-4495-8e11-7ae1b73e6506" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.200:5353: connect: connection refused" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.310272 4895 scope.go:117] "RemoveContainer" containerID="0916f4c8b5fef1c16cda0e0a7c9e21e24169b8f300b33e66a6f19bbe7420fa19" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.317778 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvldc\" (UniqueName: \"kubernetes.io/projected/6aca14c3-502a-4d42-98e6-f7c1994576a5-kube-api-access-vvldc\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.318144 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-scripts\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.318217 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aca14c3-502a-4d42-98e6-f7c1994576a5-log-httpd\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.318355 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aca14c3-502a-4d42-98e6-f7c1994576a5-run-httpd\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.318489 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.318542 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-config-data\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.318588 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.318727 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.420784 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.420844 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-config-data\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.420892 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.420970 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.421057 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vvldc\" (UniqueName: \"kubernetes.io/projected/6aca14c3-502a-4d42-98e6-f7c1994576a5-kube-api-access-vvldc\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.421150 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-scripts\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.421187 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aca14c3-502a-4d42-98e6-f7c1994576a5-log-httpd\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.421222 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aca14c3-502a-4d42-98e6-f7c1994576a5-run-httpd\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.422154 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aca14c3-502a-4d42-98e6-f7c1994576a5-run-httpd\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.423424 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aca14c3-502a-4d42-98e6-f7c1994576a5-log-httpd\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 
17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.428912 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.429277 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.429340 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.432732 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-config-data\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.432741 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-scripts\") pod \"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.445004 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvldc\" (UniqueName: \"kubernetes.io/projected/6aca14c3-502a-4d42-98e6-f7c1994576a5-kube-api-access-vvldc\") pod 
\"ceilometer-0\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.450612 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.453181 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.623790 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-openstack-edpm-ipam\") pod \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.624022 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvg2k\" (UniqueName: \"kubernetes.io/projected/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-kube-api-access-lvg2k\") pod \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.624059 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-config\") pod \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.624133 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-ovsdbserver-sb\") pod \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.624218 4895 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-ovsdbserver-nb\") pod \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.624297 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-dns-svc\") pod \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\" (UID: \"27cd5ecf-82d4-4495-8e11-7ae1b73e6506\") " Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.632566 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-kube-api-access-lvg2k" (OuterVolumeSpecName: "kube-api-access-lvg2k") pod "27cd5ecf-82d4-4495-8e11-7ae1b73e6506" (UID: "27cd5ecf-82d4-4495-8e11-7ae1b73e6506"). InnerVolumeSpecName "kube-api-access-lvg2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.722710 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-config" (OuterVolumeSpecName: "config") pod "27cd5ecf-82d4-4495-8e11-7ae1b73e6506" (UID: "27cd5ecf-82d4-4495-8e11-7ae1b73e6506"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.726107 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "27cd5ecf-82d4-4495-8e11-7ae1b73e6506" (UID: "27cd5ecf-82d4-4495-8e11-7ae1b73e6506"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.726856 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.726899 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvg2k\" (UniqueName: \"kubernetes.io/projected/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-kube-api-access-lvg2k\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.726910 4895 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.731047 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27cd5ecf-82d4-4495-8e11-7ae1b73e6506" (UID: "27cd5ecf-82d4-4495-8e11-7ae1b73e6506"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.738963 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27cd5ecf-82d4-4495-8e11-7ae1b73e6506" (UID: "27cd5ecf-82d4-4495-8e11-7ae1b73e6506"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.743936 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27cd5ecf-82d4-4495-8e11-7ae1b73e6506" (UID: "27cd5ecf-82d4-4495-8e11-7ae1b73e6506"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.829699 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.829755 4895 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.829766 4895 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27cd5ecf-82d4-4495-8e11-7ae1b73e6506-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:01 crc kubenswrapper[4895]: I0129 17:09:01.910243 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:01 crc kubenswrapper[4895]: W0129 17:09:01.921835 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6aca14c3_502a_4d42_98e6_f7c1994576a5.slice/crio-a18a31c56928e6fb93dec4057383494a4f2efb05c7a9e80b69b9ec1f52f506ce WatchSource:0}: Error finding container a18a31c56928e6fb93dec4057383494a4f2efb05c7a9e80b69b9ec1f52f506ce: Status 404 returned error can't find the container with id a18a31c56928e6fb93dec4057383494a4f2efb05c7a9e80b69b9ec1f52f506ce Jan 29 17:09:02 crc 
kubenswrapper[4895]: I0129 17:09:02.020550 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" event={"ID":"27cd5ecf-82d4-4495-8e11-7ae1b73e6506","Type":"ContainerDied","Data":"cda415dccd1e1c2dc355baab18bfc7e9d472d9ffbfa91742fc36d1dc04075a34"} Jan 29 17:09:02 crc kubenswrapper[4895]: I0129 17:09:02.020613 4895 scope.go:117] "RemoveContainer" containerID="397a7186e9f85393d3bcaa7cc82cae0af0174d2af02353d34147c282271e8b91" Jan 29 17:09:02 crc kubenswrapper[4895]: I0129 17:09:02.020755 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-rxb7m" Jan 29 17:09:02 crc kubenswrapper[4895]: I0129 17:09:02.028614 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9","Type":"ContainerStarted","Data":"c196f0dbf4a82478841c50323448a81369edb3eb481a21294a1dd9c01734f164"} Jan 29 17:09:02 crc kubenswrapper[4895]: I0129 17:09:02.032406 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aca14c3-502a-4d42-98e6-f7c1994576a5","Type":"ContainerStarted","Data":"a18a31c56928e6fb93dec4057383494a4f2efb05c7a9e80b69b9ec1f52f506ce"} Jan 29 17:09:02 crc kubenswrapper[4895]: I0129 17:09:02.032662 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 29 17:09:02 crc kubenswrapper[4895]: I0129 17:09:02.088648 4895 scope.go:117] "RemoveContainer" containerID="656b75a73b476dd50682d06760de0c8ef57df7303f790d4b1e44dcbe9a65f91e" Jan 29 17:09:02 crc kubenswrapper[4895]: I0129 17:09:02.092155 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-rxb7m"] Jan 29 17:09:02 crc kubenswrapper[4895]: I0129 17:09:02.101010 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-rxb7m"] Jan 29 17:09:03 crc kubenswrapper[4895]: I0129 17:09:03.060508 4895 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27cd5ecf-82d4-4495-8e11-7ae1b73e6506" path="/var/lib/kubelet/pods/27cd5ecf-82d4-4495-8e11-7ae1b73e6506/volumes" Jan 29 17:09:03 crc kubenswrapper[4895]: I0129 17:09:03.061676 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5" path="/var/lib/kubelet/pods/bb94dd5f-dfa2-4df0-8ad2-1c768257a6e5/volumes" Jan 29 17:09:03 crc kubenswrapper[4895]: I0129 17:09:03.062655 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aca14c3-502a-4d42-98e6-f7c1994576a5","Type":"ContainerStarted","Data":"7e3b6cea6d9461292d9853167753e222eb4d14924f2962e9bb0faf6e354e7478"} Jan 29 17:09:03 crc kubenswrapper[4895]: I0129 17:09:03.062695 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9","Type":"ContainerStarted","Data":"9b285f3e7e878670a11f81f94409822b57e3bcee9a2ea5b854ee1a5b9122d147"} Jan 29 17:09:03 crc kubenswrapper[4895]: I0129 17:09:03.090373 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.336864249 podStartE2EDuration="13.09035175s" podCreationTimestamp="2026-01-29 17:08:50 +0000 UTC" firstStartedPulling="2026-01-29 17:08:51.394718537 +0000 UTC m=+3415.197695801" lastFinishedPulling="2026-01-29 17:09:01.148206038 +0000 UTC m=+3424.951183302" observedRunningTime="2026-01-29 17:09:03.071299703 +0000 UTC m=+3426.874276977" watchObservedRunningTime="2026-01-29 17:09:03.09035175 +0000 UTC m=+3426.893329014" Jan 29 17:09:03 crc kubenswrapper[4895]: I0129 17:09:03.860051 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:04 crc kubenswrapper[4895]: I0129 17:09:04.061028 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6aca14c3-502a-4d42-98e6-f7c1994576a5","Type":"ContainerStarted","Data":"6eb2e8d0f471f6c6d423e118dcf9f83719209e6fd7b6d0fa390275a2e48d9885"} Jan 29 17:09:05 crc kubenswrapper[4895]: I0129 17:09:05.037431 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:09:05 crc kubenswrapper[4895]: I0129 17:09:05.074169 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aca14c3-502a-4d42-98e6-f7c1994576a5","Type":"ContainerStarted","Data":"94828e7e7a3851318e67d5531bcbbe05375c1a6a1946dcf99baa786434ee4a8c"} Jan 29 17:09:06 crc kubenswrapper[4895]: E0129 17:09:06.038459 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:09:06 crc kubenswrapper[4895]: I0129 17:09:06.089902 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"8521355b0f55a845d2778709168b873fa7370171e7b215ad9f2d5fc6646fbd29"} Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.113314 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aca14c3-502a-4d42-98e6-f7c1994576a5","Type":"ContainerStarted","Data":"11dcbe5c6fcafa28ac931a33b7145cdb3e8d2477c86b9e6d31a2b96369789363"} Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.114109 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="proxy-httpd" containerID="cri-o://11dcbe5c6fcafa28ac931a33b7145cdb3e8d2477c86b9e6d31a2b96369789363" 
gracePeriod=30 Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.114137 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="ceilometer-notification-agent" containerID="cri-o://6eb2e8d0f471f6c6d423e118dcf9f83719209e6fd7b6d0fa390275a2e48d9885" gracePeriod=30 Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.114376 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.114133 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="sg-core" containerID="cri-o://94828e7e7a3851318e67d5531bcbbe05375c1a6a1946dcf99baa786434ee4a8c" gracePeriod=30 Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.114430 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="ceilometer-central-agent" containerID="cri-o://7e3b6cea6d9461292d9853167753e222eb4d14924f2962e9bb0faf6e354e7478" gracePeriod=30 Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.144605 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.394714306 podStartE2EDuration="7.144580699s" podCreationTimestamp="2026-01-29 17:09:01 +0000 UTC" firstStartedPulling="2026-01-29 17:09:01.925898263 +0000 UTC m=+3425.728875527" lastFinishedPulling="2026-01-29 17:09:06.675764656 +0000 UTC m=+3430.478741920" observedRunningTime="2026-01-29 17:09:08.141818074 +0000 UTC m=+3431.944795338" watchObservedRunningTime="2026-01-29 17:09:08.144580699 +0000 UTC m=+3431.947557973" Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.937160 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-7x2t6"] Jan 29 17:09:08 crc kubenswrapper[4895]: E0129 17:09:08.938142 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27cd5ecf-82d4-4495-8e11-7ae1b73e6506" containerName="init" Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.938169 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="27cd5ecf-82d4-4495-8e11-7ae1b73e6506" containerName="init" Jan 29 17:09:08 crc kubenswrapper[4895]: E0129 17:09:08.938185 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27cd5ecf-82d4-4495-8e11-7ae1b73e6506" containerName="dnsmasq-dns" Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.938194 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="27cd5ecf-82d4-4495-8e11-7ae1b73e6506" containerName="dnsmasq-dns" Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.938463 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="27cd5ecf-82d4-4495-8e11-7ae1b73e6506" containerName="dnsmasq-dns" Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.940580 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:08 crc kubenswrapper[4895]: I0129 17:09:08.950818 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x2t6"] Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.099814 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80355c10-d1b0-418c-a7b9-20366486946f-catalog-content\") pod \"redhat-marketplace-7x2t6\" (UID: \"80355c10-d1b0-418c-a7b9-20366486946f\") " pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.099894 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80355c10-d1b0-418c-a7b9-20366486946f-utilities\") pod \"redhat-marketplace-7x2t6\" (UID: \"80355c10-d1b0-418c-a7b9-20366486946f\") " pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.099917 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt2vl\" (UniqueName: \"kubernetes.io/projected/80355c10-d1b0-418c-a7b9-20366486946f-kube-api-access-xt2vl\") pod \"redhat-marketplace-7x2t6\" (UID: \"80355c10-d1b0-418c-a7b9-20366486946f\") " pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.134595 4895 generic.go:334] "Generic (PLEG): container finished" podID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerID="11dcbe5c6fcafa28ac931a33b7145cdb3e8d2477c86b9e6d31a2b96369789363" exitCode=0 Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.134626 4895 generic.go:334] "Generic (PLEG): container finished" podID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerID="94828e7e7a3851318e67d5531bcbbe05375c1a6a1946dcf99baa786434ee4a8c" 
exitCode=2 Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.134636 4895 generic.go:334] "Generic (PLEG): container finished" podID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerID="6eb2e8d0f471f6c6d423e118dcf9f83719209e6fd7b6d0fa390275a2e48d9885" exitCode=0 Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.134645 4895 generic.go:334] "Generic (PLEG): container finished" podID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerID="7e3b6cea6d9461292d9853167753e222eb4d14924f2962e9bb0faf6e354e7478" exitCode=0 Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.134669 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aca14c3-502a-4d42-98e6-f7c1994576a5","Type":"ContainerDied","Data":"11dcbe5c6fcafa28ac931a33b7145cdb3e8d2477c86b9e6d31a2b96369789363"} Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.134699 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aca14c3-502a-4d42-98e6-f7c1994576a5","Type":"ContainerDied","Data":"94828e7e7a3851318e67d5531bcbbe05375c1a6a1946dcf99baa786434ee4a8c"} Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.134715 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aca14c3-502a-4d42-98e6-f7c1994576a5","Type":"ContainerDied","Data":"6eb2e8d0f471f6c6d423e118dcf9f83719209e6fd7b6d0fa390275a2e48d9885"} Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.134726 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aca14c3-502a-4d42-98e6-f7c1994576a5","Type":"ContainerDied","Data":"7e3b6cea6d9461292d9853167753e222eb4d14924f2962e9bb0faf6e354e7478"} Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.203150 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80355c10-d1b0-418c-a7b9-20366486946f-catalog-content\") pod \"redhat-marketplace-7x2t6\" 
(UID: \"80355c10-d1b0-418c-a7b9-20366486946f\") " pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.203203 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80355c10-d1b0-418c-a7b9-20366486946f-utilities\") pod \"redhat-marketplace-7x2t6\" (UID: \"80355c10-d1b0-418c-a7b9-20366486946f\") " pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.203221 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt2vl\" (UniqueName: \"kubernetes.io/projected/80355c10-d1b0-418c-a7b9-20366486946f-kube-api-access-xt2vl\") pod \"redhat-marketplace-7x2t6\" (UID: \"80355c10-d1b0-418c-a7b9-20366486946f\") " pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.203798 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80355c10-d1b0-418c-a7b9-20366486946f-catalog-content\") pod \"redhat-marketplace-7x2t6\" (UID: \"80355c10-d1b0-418c-a7b9-20366486946f\") " pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.203858 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80355c10-d1b0-418c-a7b9-20366486946f-utilities\") pod \"redhat-marketplace-7x2t6\" (UID: \"80355c10-d1b0-418c-a7b9-20366486946f\") " pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.223160 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt2vl\" (UniqueName: \"kubernetes.io/projected/80355c10-d1b0-418c-a7b9-20366486946f-kube-api-access-xt2vl\") pod \"redhat-marketplace-7x2t6\" (UID: \"80355c10-d1b0-418c-a7b9-20366486946f\") 
" pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.269440 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.447489 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.611143 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aca14c3-502a-4d42-98e6-f7c1994576a5-run-httpd\") pod \"6aca14c3-502a-4d42-98e6-f7c1994576a5\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.611318 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvldc\" (UniqueName: \"kubernetes.io/projected/6aca14c3-502a-4d42-98e6-f7c1994576a5-kube-api-access-vvldc\") pod \"6aca14c3-502a-4d42-98e6-f7c1994576a5\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.611683 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aca14c3-502a-4d42-98e6-f7c1994576a5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6aca14c3-502a-4d42-98e6-f7c1994576a5" (UID: "6aca14c3-502a-4d42-98e6-f7c1994576a5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.612065 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-ceilometer-tls-certs\") pod \"6aca14c3-502a-4d42-98e6-f7c1994576a5\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.612115 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-scripts\") pod \"6aca14c3-502a-4d42-98e6-f7c1994576a5\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.612143 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-config-data\") pod \"6aca14c3-502a-4d42-98e6-f7c1994576a5\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.612182 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aca14c3-502a-4d42-98e6-f7c1994576a5-log-httpd\") pod \"6aca14c3-502a-4d42-98e6-f7c1994576a5\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.612298 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-combined-ca-bundle\") pod \"6aca14c3-502a-4d42-98e6-f7c1994576a5\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.612359 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-sg-core-conf-yaml\") pod \"6aca14c3-502a-4d42-98e6-f7c1994576a5\" (UID: \"6aca14c3-502a-4d42-98e6-f7c1994576a5\") " Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.612777 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aca14c3-502a-4d42-98e6-f7c1994576a5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6aca14c3-502a-4d42-98e6-f7c1994576a5" (UID: "6aca14c3-502a-4d42-98e6-f7c1994576a5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.613421 4895 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aca14c3-502a-4d42-98e6-f7c1994576a5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.613450 4895 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aca14c3-502a-4d42-98e6-f7c1994576a5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.617465 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aca14c3-502a-4d42-98e6-f7c1994576a5-kube-api-access-vvldc" (OuterVolumeSpecName: "kube-api-access-vvldc") pod "6aca14c3-502a-4d42-98e6-f7c1994576a5" (UID: "6aca14c3-502a-4d42-98e6-f7c1994576a5"). InnerVolumeSpecName "kube-api-access-vvldc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.617693 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-scripts" (OuterVolumeSpecName: "scripts") pod "6aca14c3-502a-4d42-98e6-f7c1994576a5" (UID: "6aca14c3-502a-4d42-98e6-f7c1994576a5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.640499 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6aca14c3-502a-4d42-98e6-f7c1994576a5" (UID: "6aca14c3-502a-4d42-98e6-f7c1994576a5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.666049 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6aca14c3-502a-4d42-98e6-f7c1994576a5" (UID: "6aca14c3-502a-4d42-98e6-f7c1994576a5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.693376 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6aca14c3-502a-4d42-98e6-f7c1994576a5" (UID: "6aca14c3-502a-4d42-98e6-f7c1994576a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.708702 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-config-data" (OuterVolumeSpecName: "config-data") pod "6aca14c3-502a-4d42-98e6-f7c1994576a5" (UID: "6aca14c3-502a-4d42-98e6-f7c1994576a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.715450 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvldc\" (UniqueName: \"kubernetes.io/projected/6aca14c3-502a-4d42-98e6-f7c1994576a5-kube-api-access-vvldc\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.715639 4895 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.715709 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.715759 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.715809 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.715861 4895 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6aca14c3-502a-4d42-98e6-f7c1994576a5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:09 crc kubenswrapper[4895]: W0129 17:09:09.763425 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80355c10_d1b0_418c_a7b9_20366486946f.slice/crio-5c114ab9ceb4ed25d180ed7bd66883827dd59d9d62f8bf505d673702f8b18693 WatchSource:0}: Error 
finding container 5c114ab9ceb4ed25d180ed7bd66883827dd59d9d62f8bf505d673702f8b18693: Status 404 returned error can't find the container with id 5c114ab9ceb4ed25d180ed7bd66883827dd59d9d62f8bf505d673702f8b18693 Jan 29 17:09:09 crc kubenswrapper[4895]: I0129 17:09:09.765625 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x2t6"] Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.146543 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aca14c3-502a-4d42-98e6-f7c1994576a5","Type":"ContainerDied","Data":"a18a31c56928e6fb93dec4057383494a4f2efb05c7a9e80b69b9ec1f52f506ce"} Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.146607 4895 scope.go:117] "RemoveContainer" containerID="11dcbe5c6fcafa28ac931a33b7145cdb3e8d2477c86b9e6d31a2b96369789363" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.146676 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.149823 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x2t6" event={"ID":"80355c10-d1b0-418c-a7b9-20366486946f","Type":"ContainerStarted","Data":"5c114ab9ceb4ed25d180ed7bd66883827dd59d9d62f8bf505d673702f8b18693"} Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.169601 4895 scope.go:117] "RemoveContainer" containerID="94828e7e7a3851318e67d5531bcbbe05375c1a6a1946dcf99baa786434ee4a8c" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.193182 4895 scope.go:117] "RemoveContainer" containerID="6eb2e8d0f471f6c6d423e118dcf9f83719209e6fd7b6d0fa390275a2e48d9885" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.197726 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.216487 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.229450 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:10 crc kubenswrapper[4895]: E0129 17:09:10.229937 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="sg-core" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.229966 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="sg-core" Jan 29 17:09:10 crc kubenswrapper[4895]: E0129 17:09:10.229989 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="ceilometer-notification-agent" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.229999 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="ceilometer-notification-agent" Jan 29 17:09:10 crc kubenswrapper[4895]: E0129 17:09:10.230016 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="ceilometer-central-agent" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.230025 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="ceilometer-central-agent" Jan 29 17:09:10 crc kubenswrapper[4895]: E0129 17:09:10.230038 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="proxy-httpd" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.230046 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="proxy-httpd" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.230266 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="proxy-httpd" Jan 29 
17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.230282 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="sg-core" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.230295 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="ceilometer-notification-agent" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.230312 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" containerName="ceilometer-central-agent" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.232102 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.237723 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.237771 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.237742 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.248640 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.327598 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.327671 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.327712 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-scripts\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.327744 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-config-data\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.327917 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad356ea5-8184-46f6-b58d-399b0a742239-log-httpd\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.328129 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad356ea5-8184-46f6-b58d-399b0a742239-run-httpd\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.328315 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.328543 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv2jm\" (UniqueName: \"kubernetes.io/projected/ad356ea5-8184-46f6-b58d-399b0a742239-kube-api-access-zv2jm\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.392455 4895 scope.go:117] "RemoveContainer" containerID="7e3b6cea6d9461292d9853167753e222eb4d14924f2962e9bb0faf6e354e7478" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.430244 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.430346 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv2jm\" (UniqueName: \"kubernetes.io/projected/ad356ea5-8184-46f6-b58d-399b0a742239-kube-api-access-zv2jm\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.430382 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.430419 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.430468 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-scripts\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.430509 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-config-data\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.430549 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad356ea5-8184-46f6-b58d-399b0a742239-log-httpd\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.430600 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad356ea5-8184-46f6-b58d-399b0a742239-run-httpd\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.431767 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad356ea5-8184-46f6-b58d-399b0a742239-run-httpd\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.431823 4895 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad356ea5-8184-46f6-b58d-399b0a742239-log-httpd\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.433719 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.437013 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-scripts\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.437890 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.437980 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.442730 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-config-data\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.447170 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ad356ea5-8184-46f6-b58d-399b0a742239-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.449210 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv2jm\" (UniqueName: \"kubernetes.io/projected/ad356ea5-8184-46f6-b58d-399b0a742239-kube-api-access-zv2jm\") pod \"ceilometer-0\" (UID: \"ad356ea5-8184-46f6-b58d-399b0a742239\") " pod="openstack/ceilometer-0" Jan 29 17:09:10 crc kubenswrapper[4895]: I0129 17:09:10.552401 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:09:11 crc kubenswrapper[4895]: E0129 17:09:11.045161 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:09:11 crc kubenswrapper[4895]: I0129 17:09:11.050277 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aca14c3-502a-4d42-98e6-f7c1994576a5" path="/var/lib/kubelet/pods/6aca14c3-502a-4d42-98e6-f7c1994576a5/volumes" Jan 29 17:09:11 crc kubenswrapper[4895]: I0129 17:09:11.060274 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:09:11 crc kubenswrapper[4895]: W0129 17:09:11.063376 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad356ea5_8184_46f6_b58d_399b0a742239.slice/crio-014b7ac430477f1e267728493ba5b02957cfa44604fdbd76e5978068c15a1417 WatchSource:0}: Error finding container 014b7ac430477f1e267728493ba5b02957cfa44604fdbd76e5978068c15a1417: Status 404 returned error can't find the container with id 
014b7ac430477f1e267728493ba5b02957cfa44604fdbd76e5978068c15a1417 Jan 29 17:09:11 crc kubenswrapper[4895]: I0129 17:09:11.169552 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad356ea5-8184-46f6-b58d-399b0a742239","Type":"ContainerStarted","Data":"014b7ac430477f1e267728493ba5b02957cfa44604fdbd76e5978068c15a1417"} Jan 29 17:09:11 crc kubenswrapper[4895]: I0129 17:09:11.178217 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x2t6" event={"ID":"80355c10-d1b0-418c-a7b9-20366486946f","Type":"ContainerStarted","Data":"953a2e5de3087d23819f11d0c8745e483992aa8d5eb68206e803e999e5b7d9a8"} Jan 29 17:09:12 crc kubenswrapper[4895]: I0129 17:09:12.193045 4895 generic.go:334] "Generic (PLEG): container finished" podID="80355c10-d1b0-418c-a7b9-20366486946f" containerID="953a2e5de3087d23819f11d0c8745e483992aa8d5eb68206e803e999e5b7d9a8" exitCode=0 Jan 29 17:09:12 crc kubenswrapper[4895]: I0129 17:09:12.193133 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x2t6" event={"ID":"80355c10-d1b0-418c-a7b9-20366486946f","Type":"ContainerDied","Data":"953a2e5de3087d23819f11d0c8745e483992aa8d5eb68206e803e999e5b7d9a8"} Jan 29 17:09:12 crc kubenswrapper[4895]: I0129 17:09:12.409173 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 29 17:09:12 crc kubenswrapper[4895]: I0129 17:09:12.451291 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 17:09:13 crc kubenswrapper[4895]: I0129 17:09:13.202455 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad356ea5-8184-46f6-b58d-399b0a742239","Type":"ContainerStarted","Data":"001c23214fb346705304446bf099212b58de72ad7f0fa9057539986c32dacf09"} Jan 29 17:09:13 crc kubenswrapper[4895]: I0129 17:09:13.202973 4895 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/manila-scheduler-0" podUID="37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" containerName="manila-scheduler" containerID="cri-o://6f6cf4844b42c6b02157e07d0d2bbcccce0d103277242bf328a5e9223884d4a8" gracePeriod=30 Jan 29 17:09:13 crc kubenswrapper[4895]: I0129 17:09:13.203458 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" containerName="probe" containerID="cri-o://3d81ec1ccf585fa30eb070500fa060e3f4d854ea8d9a45f5d2c9c404ca72eb3b" gracePeriod=30 Jan 29 17:09:14 crc kubenswrapper[4895]: I0129 17:09:14.229507 4895 generic.go:334] "Generic (PLEG): container finished" podID="37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" containerID="3d81ec1ccf585fa30eb070500fa060e3f4d854ea8d9a45f5d2c9c404ca72eb3b" exitCode=0 Jan 29 17:09:14 crc kubenswrapper[4895]: I0129 17:09:14.229907 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5","Type":"ContainerDied","Data":"3d81ec1ccf585fa30eb070500fa060e3f4d854ea8d9a45f5d2c9c404ca72eb3b"} Jan 29 17:09:15 crc kubenswrapper[4895]: I0129 17:09:15.246032 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad356ea5-8184-46f6-b58d-399b0a742239","Type":"ContainerStarted","Data":"6175ae320d43176796d3b718fd6f459610ab95be86a69e595b74eb3ad7686e5e"} Jan 29 17:09:15 crc kubenswrapper[4895]: I0129 17:09:15.249743 4895 generic.go:334] "Generic (PLEG): container finished" podID="80355c10-d1b0-418c-a7b9-20366486946f" containerID="1dd87a45cbd8f8e1c4126bf99c482fda0c47ada0f3fbf3ff9527e261e15748fb" exitCode=0 Jan 29 17:09:15 crc kubenswrapper[4895]: I0129 17:09:15.249784 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x2t6" 
event={"ID":"80355c10-d1b0-418c-a7b9-20366486946f","Type":"ContainerDied","Data":"1dd87a45cbd8f8e1c4126bf99c482fda0c47ada0f3fbf3ff9527e261e15748fb"} Jan 29 17:09:16 crc kubenswrapper[4895]: E0129 17:09:16.041223 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:09:17 crc kubenswrapper[4895]: E0129 17:09:17.045148 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:09:18 crc kubenswrapper[4895]: I0129 17:09:18.284154 4895 generic.go:334] "Generic (PLEG): container finished" podID="37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" containerID="6f6cf4844b42c6b02157e07d0d2bbcccce0d103277242bf328a5e9223884d4a8" exitCode=0 Jan 29 17:09:18 crc kubenswrapper[4895]: I0129 17:09:18.284324 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5","Type":"ContainerDied","Data":"6f6cf4844b42c6b02157e07d0d2bbcccce0d103277242bf328a5e9223884d4a8"} Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.162233 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.300086 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad356ea5-8184-46f6-b58d-399b0a742239","Type":"ContainerStarted","Data":"d5f6e5b7a5f51f0411438a9c462c65bcf0efe6fd71d98da14a41b3d6631f6571"} Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.302797 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.302790 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5","Type":"ContainerDied","Data":"5b31c673a720d5283137a7691c852b9ed4ee7f30c7ff9e3f5bc779acfca01d95"} Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.302935 4895 scope.go:117] "RemoveContainer" containerID="3d81ec1ccf585fa30eb070500fa060e3f4d854ea8d9a45f5d2c9c404ca72eb3b" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.305840 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x2t6" event={"ID":"80355c10-d1b0-418c-a7b9-20366486946f","Type":"ContainerStarted","Data":"98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba"} Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.324689 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7x2t6" podStartSLOduration=5.049075826 podStartE2EDuration="11.324673721s" podCreationTimestamp="2026-01-29 17:09:08 +0000 UTC" firstStartedPulling="2026-01-29 17:09:12.194981479 +0000 UTC m=+3435.997958743" lastFinishedPulling="2026-01-29 17:09:18.470579374 +0000 UTC m=+3442.273556638" observedRunningTime="2026-01-29 17:09:19.323393766 +0000 UTC m=+3443.126371040" watchObservedRunningTime="2026-01-29 17:09:19.324673721 +0000 UTC m=+3443.127650985" Jan 29 17:09:19 crc 
kubenswrapper[4895]: I0129 17:09:19.328311 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-etc-machine-id\") pod \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.328362 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-config-data\") pod \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.328431 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" (UID: "37b3c9db-3979-4e8c-b9b2-69a4cd1608f5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.328466 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjkfj\" (UniqueName: \"kubernetes.io/projected/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-kube-api-access-mjkfj\") pod \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.328494 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-config-data-custom\") pod \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.328531 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-combined-ca-bundle\") pod \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.328653 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-scripts\") pod \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\" (UID: \"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5\") " Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.329199 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.334426 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-scripts" (OuterVolumeSpecName: "scripts") pod 
"37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" (UID: "37b3c9db-3979-4e8c-b9b2-69a4cd1608f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.338652 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-kube-api-access-mjkfj" (OuterVolumeSpecName: "kube-api-access-mjkfj") pod "37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" (UID: "37b3c9db-3979-4e8c-b9b2-69a4cd1608f5"). InnerVolumeSpecName "kube-api-access-mjkfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.354437 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" (UID: "37b3c9db-3979-4e8c-b9b2-69a4cd1608f5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.362495 4895 scope.go:117] "RemoveContainer" containerID="6f6cf4844b42c6b02157e07d0d2bbcccce0d103277242bf328a5e9223884d4a8" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.404703 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" (UID: "37b3c9db-3979-4e8c-b9b2-69a4cd1608f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.431166 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjkfj\" (UniqueName: \"kubernetes.io/projected/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-kube-api-access-mjkfj\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.431204 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.431213 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.431222 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.443815 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-config-data" (OuterVolumeSpecName: "config-data") pod "37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" (UID: "37b3c9db-3979-4e8c-b9b2-69a4cd1608f5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.533019 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.536524 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.640169 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.655704 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.675726 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 17:09:19 crc kubenswrapper[4895]: E0129 17:09:19.676274 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" containerName="probe" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.676296 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" containerName="probe" Jan 29 17:09:19 crc kubenswrapper[4895]: E0129 17:09:19.676314 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" containerName="manila-scheduler" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.676324 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" containerName="manila-scheduler" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.676590 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" containerName="manila-scheduler" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.676615 4895 
memory_manager.go:354] "RemoveStaleState removing state" podUID="37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" containerName="probe" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.677923 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.687930 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.687945 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.840250 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca80fce3-90df-492f-8819-1df2e246b1b5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.840327 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca80fce3-90df-492f-8819-1df2e246b1b5-scripts\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.840402 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca80fce3-90df-492f-8819-1df2e246b1b5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.840465 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p88ws\" (UniqueName: 
\"kubernetes.io/projected/ca80fce3-90df-492f-8819-1df2e246b1b5-kube-api-access-p88ws\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.840709 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca80fce3-90df-492f-8819-1df2e246b1b5-config-data\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.840949 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca80fce3-90df-492f-8819-1df2e246b1b5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.943040 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca80fce3-90df-492f-8819-1df2e246b1b5-config-data\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.943617 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca80fce3-90df-492f-8819-1df2e246b1b5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.943693 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca80fce3-90df-492f-8819-1df2e246b1b5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: 
\"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.943729 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca80fce3-90df-492f-8819-1df2e246b1b5-scripts\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.943766 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca80fce3-90df-492f-8819-1df2e246b1b5-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.943848 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca80fce3-90df-492f-8819-1df2e246b1b5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.943959 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p88ws\" (UniqueName: \"kubernetes.io/projected/ca80fce3-90df-492f-8819-1df2e246b1b5-kube-api-access-p88ws\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.948675 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca80fce3-90df-492f-8819-1df2e246b1b5-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.949171 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca80fce3-90df-492f-8819-1df2e246b1b5-config-data\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.949532 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca80fce3-90df-492f-8819-1df2e246b1b5-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.950369 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca80fce3-90df-492f-8819-1df2e246b1b5-scripts\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:19 crc kubenswrapper[4895]: I0129 17:09:19.962677 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p88ws\" (UniqueName: \"kubernetes.io/projected/ca80fce3-90df-492f-8819-1df2e246b1b5-kube-api-access-p88ws\") pod \"manila-scheduler-0\" (UID: \"ca80fce3-90df-492f-8819-1df2e246b1b5\") " pod="openstack/manila-scheduler-0" Jan 29 17:09:20 crc kubenswrapper[4895]: I0129 17:09:20.010737 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 29 17:09:20 crc kubenswrapper[4895]: I0129 17:09:20.471809 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 29 17:09:20 crc kubenswrapper[4895]: W0129 17:09:20.482513 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca80fce3_90df_492f_8819_1df2e246b1b5.slice/crio-a763baf535df2d4a25cd422b50aff7d81e7ff3d57d738aa95b80b2cf09a32a6d WatchSource:0}: Error finding container a763baf535df2d4a25cd422b50aff7d81e7ff3d57d738aa95b80b2cf09a32a6d: Status 404 returned error can't find the container with id a763baf535df2d4a25cd422b50aff7d81e7ff3d57d738aa95b80b2cf09a32a6d Jan 29 17:09:21 crc kubenswrapper[4895]: I0129 17:09:21.057400 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37b3c9db-3979-4e8c-b9b2-69a4cd1608f5" path="/var/lib/kubelet/pods/37b3c9db-3979-4e8c-b9b2-69a4cd1608f5/volumes" Jan 29 17:09:21 crc kubenswrapper[4895]: I0129 17:09:21.334158 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ca80fce3-90df-492f-8819-1df2e246b1b5","Type":"ContainerStarted","Data":"adc7093eeb15804937011f29cf51c500b9f42f2878941c56a769e848a2d54bd0"} Jan 29 17:09:21 crc kubenswrapper[4895]: I0129 17:09:21.334229 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ca80fce3-90df-492f-8819-1df2e246b1b5","Type":"ContainerStarted","Data":"a763baf535df2d4a25cd422b50aff7d81e7ff3d57d738aa95b80b2cf09a32a6d"} Jan 29 17:09:22 crc kubenswrapper[4895]: I0129 17:09:22.252797 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 29 17:09:22 crc kubenswrapper[4895]: I0129 17:09:22.342423 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 17:09:22 crc kubenswrapper[4895]: I0129 
17:09:22.347618 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" containerName="manila-share" containerID="cri-o://c196f0dbf4a82478841c50323448a81369edb3eb481a21294a1dd9c01734f164" gracePeriod=30 Jan 29 17:09:22 crc kubenswrapper[4895]: I0129 17:09:22.347788 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"ca80fce3-90df-492f-8819-1df2e246b1b5","Type":"ContainerStarted","Data":"058f9f909e45d01dcc8530d2eb4d336f18d851d7d55c57b677a32ec8bd47923b"} Jan 29 17:09:22 crc kubenswrapper[4895]: I0129 17:09:22.347925 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" containerName="probe" containerID="cri-o://9b285f3e7e878670a11f81f94409822b57e3bcee9a2ea5b854ee1a5b9122d147" gracePeriod=30 Jan 29 17:09:22 crc kubenswrapper[4895]: I0129 17:09:22.392155 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.392134008 podStartE2EDuration="3.392134008s" podCreationTimestamp="2026-01-29 17:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:22.380928784 +0000 UTC m=+3446.183906068" watchObservedRunningTime="2026-01-29 17:09:22.392134008 +0000 UTC m=+3446.195111282" Jan 29 17:09:23 crc kubenswrapper[4895]: E0129 17:09:23.038956 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" Jan 29 17:09:23 crc kubenswrapper[4895]: I0129 17:09:23.360770 4895 generic.go:334] "Generic 
(PLEG): container finished" podID="13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" containerID="9b285f3e7e878670a11f81f94409822b57e3bcee9a2ea5b854ee1a5b9122d147" exitCode=0 Jan 29 17:09:23 crc kubenswrapper[4895]: I0129 17:09:23.360805 4895 generic.go:334] "Generic (PLEG): container finished" podID="13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" containerID="c196f0dbf4a82478841c50323448a81369edb3eb481a21294a1dd9c01734f164" exitCode=1 Jan 29 17:09:23 crc kubenswrapper[4895]: I0129 17:09:23.360847 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9","Type":"ContainerDied","Data":"9b285f3e7e878670a11f81f94409822b57e3bcee9a2ea5b854ee1a5b9122d147"} Jan 29 17:09:23 crc kubenswrapper[4895]: I0129 17:09:23.360959 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9","Type":"ContainerDied","Data":"c196f0dbf4a82478841c50323448a81369edb3eb481a21294a1dd9c01734f164"} Jan 29 17:09:23 crc kubenswrapper[4895]: I0129 17:09:23.363539 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad356ea5-8184-46f6-b58d-399b0a742239","Type":"ContainerStarted","Data":"e15512b5681dac5432c053dcbb7f9991959472486bdbd3d5ee81c835f037a679"} Jan 29 17:09:23 crc kubenswrapper[4895]: I0129 17:09:23.392724 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9257456259999999 podStartE2EDuration="13.392701012s" podCreationTimestamp="2026-01-29 17:09:10 +0000 UTC" firstStartedPulling="2026-01-29 17:09:11.066675923 +0000 UTC m=+3434.869653187" lastFinishedPulling="2026-01-29 17:09:22.533631309 +0000 UTC m=+3446.336608573" observedRunningTime="2026-01-29 17:09:23.382297199 +0000 UTC m=+3447.185274463" watchObservedRunningTime="2026-01-29 17:09:23.392701012 +0000 UTC m=+3447.195678286" Jan 29 17:09:23 crc kubenswrapper[4895]: I0129 
17:09:23.873781 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.042623 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-var-lib-manila\") pod \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.042713 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" (UID: "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.042897 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-ceph\") pod \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.043043 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6629\" (UniqueName: \"kubernetes.io/projected/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-kube-api-access-k6629\") pod \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.043454 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-combined-ca-bundle\") pod \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " Jan 29 
17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.043557 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-scripts\") pod \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.043651 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-etc-machine-id\") pod \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.043675 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-config-data\") pod \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.043704 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-config-data-custom\") pod \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\" (UID: \"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9\") " Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.043742 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" (UID: "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.044381 4895 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.044402 4895 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.049165 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-scripts" (OuterVolumeSpecName: "scripts") pod "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" (UID: "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.049228 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-kube-api-access-k6629" (OuterVolumeSpecName: "kube-api-access-k6629") pod "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" (UID: "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9"). InnerVolumeSpecName "kube-api-access-k6629". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.050301 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" (UID: "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.051355 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-ceph" (OuterVolumeSpecName: "ceph") pod "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" (UID: "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.105169 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" (UID: "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.147467 4895 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.147508 4895 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-ceph\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.147521 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6629\" (UniqueName: \"kubernetes.io/projected/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-kube-api-access-k6629\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.147533 4895 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 
17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.147542 4895 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.179833 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-config-data" (OuterVolumeSpecName: "config-data") pod "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" (UID: "13fb791c-1978-45b7-8a1e-4e74f2b6bfd9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.248217 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.374624 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"13fb791c-1978-45b7-8a1e-4e74f2b6bfd9","Type":"ContainerDied","Data":"2c85b848ce61180bab3d6ec4ae63acd0cee29af34d2717b90f8bf4334fa6c8d8"} Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.374688 4895 scope.go:117] "RemoveContainer" containerID="9b285f3e7e878670a11f81f94409822b57e3bcee9a2ea5b854ee1a5b9122d147" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.374910 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.374910 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.405935 4895 scope.go:117] "RemoveContainer" containerID="c196f0dbf4a82478841c50323448a81369edb3eb481a21294a1dd9c01734f164" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.419917 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.431261 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.460203 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 17:09:24 crc kubenswrapper[4895]: E0129 17:09:24.461060 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" containerName="manila-share" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.461086 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" containerName="manila-share" Jan 29 17:09:24 crc kubenswrapper[4895]: E0129 17:09:24.461126 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" containerName="probe" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.461135 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" containerName="probe" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.461348 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" containerName="manila-share" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.461371 4895 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" containerName="probe" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.463044 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.472817 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.473827 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.674816 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-config-data\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.676291 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-ceph\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.676455 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.676484 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdgkh\" (UniqueName: 
\"kubernetes.io/projected/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-kube-api-access-cdgkh\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.676508 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.676768 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-scripts\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.676837 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.676937 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.779775 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-ceph\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.779893 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.779915 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.779931 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdgkh\" (UniqueName: \"kubernetes.io/projected/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-kube-api-access-cdgkh\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.780010 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-scripts\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.780052 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-config-data-custom\") pod \"manila-share-share1-0\" (UID: 
\"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.780083 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.780113 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-config-data\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.784040 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-scripts\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.784123 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.784177 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.787081 4895 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-config-data\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.791066 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-ceph\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.791344 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.799489 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:24 crc kubenswrapper[4895]: I0129 17:09:24.807019 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdgkh\" (UniqueName: \"kubernetes.io/projected/c45486c1-31e6-47ac-94aa-5da2c0edbbaf-kube-api-access-cdgkh\") pod \"manila-share-share1-0\" (UID: \"c45486c1-31e6-47ac-94aa-5da2c0edbbaf\") " pod="openstack/manila-share-share1-0" Jan 29 17:09:25 crc kubenswrapper[4895]: I0129 17:09:25.050742 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13fb791c-1978-45b7-8a1e-4e74f2b6bfd9" 
path="/var/lib/kubelet/pods/13fb791c-1978-45b7-8a1e-4e74f2b6bfd9/volumes" Jan 29 17:09:25 crc kubenswrapper[4895]: I0129 17:09:25.090709 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 29 17:09:25 crc kubenswrapper[4895]: I0129 17:09:25.637746 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 29 17:09:25 crc kubenswrapper[4895]: W0129 17:09:25.638140 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc45486c1_31e6_47ac_94aa_5da2c0edbbaf.slice/crio-8e84098d4275988f675968b4d5ee27b62eb8b30ee5f2a1db78acd08bfe41147b WatchSource:0}: Error finding container 8e84098d4275988f675968b4d5ee27b62eb8b30ee5f2a1db78acd08bfe41147b: Status 404 returned error can't find the container with id 8e84098d4275988f675968b4d5ee27b62eb8b30ee5f2a1db78acd08bfe41147b Jan 29 17:09:26 crc kubenswrapper[4895]: I0129 17:09:26.397619 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c45486c1-31e6-47ac-94aa-5da2c0edbbaf","Type":"ContainerStarted","Data":"9d8fc717dfb3327e6d52bfe009713ab0f89785c34901ae0edc840f98764ec744"} Jan 29 17:09:26 crc kubenswrapper[4895]: I0129 17:09:26.397962 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c45486c1-31e6-47ac-94aa-5da2c0edbbaf","Type":"ContainerStarted","Data":"8e84098d4275988f675968b4d5ee27b62eb8b30ee5f2a1db78acd08bfe41147b"} Jan 29 17:09:27 crc kubenswrapper[4895]: I0129 17:09:27.409660 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c45486c1-31e6-47ac-94aa-5da2c0edbbaf","Type":"ContainerStarted","Data":"fdc65afde48ee0a599a21b9de77a89d4b4488c391eca0496f0f8817ec3358b46"} Jan 29 17:09:27 crc kubenswrapper[4895]: I0129 17:09:27.428116 4895 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.428098026 podStartE2EDuration="3.428098026s" podCreationTimestamp="2026-01-29 17:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:09:27.426598186 +0000 UTC m=+3451.229575480" watchObservedRunningTime="2026-01-29 17:09:27.428098026 +0000 UTC m=+3451.231075310" Jan 29 17:09:28 crc kubenswrapper[4895]: E0129 17:09:28.040687 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:09:29 crc kubenswrapper[4895]: I0129 17:09:29.270514 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:29 crc kubenswrapper[4895]: I0129 17:09:29.270580 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:29 crc kubenswrapper[4895]: I0129 17:09:29.320633 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:29 crc kubenswrapper[4895]: I0129 17:09:29.476640 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:29 crc kubenswrapper[4895]: I0129 17:09:29.562657 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x2t6"] Jan 29 17:09:30 crc kubenswrapper[4895]: I0129 17:09:30.011266 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 29 17:09:31 crc kubenswrapper[4895]: E0129 17:09:31.039314 
4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:09:31 crc kubenswrapper[4895]: I0129 17:09:31.447084 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7x2t6" podUID="80355c10-d1b0-418c-a7b9-20366486946f" containerName="registry-server" containerID="cri-o://98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba" gracePeriod=2 Jan 29 17:09:31 crc kubenswrapper[4895]: I0129 17:09:31.850780 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.036240 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80355c10-d1b0-418c-a7b9-20366486946f-utilities\") pod \"80355c10-d1b0-418c-a7b9-20366486946f\" (UID: \"80355c10-d1b0-418c-a7b9-20366486946f\") " Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.036477 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80355c10-d1b0-418c-a7b9-20366486946f-catalog-content\") pod \"80355c10-d1b0-418c-a7b9-20366486946f\" (UID: \"80355c10-d1b0-418c-a7b9-20366486946f\") " Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.036577 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt2vl\" (UniqueName: \"kubernetes.io/projected/80355c10-d1b0-418c-a7b9-20366486946f-kube-api-access-xt2vl\") pod \"80355c10-d1b0-418c-a7b9-20366486946f\" (UID: \"80355c10-d1b0-418c-a7b9-20366486946f\") " Jan 29 17:09:32 crc 
kubenswrapper[4895]: I0129 17:09:32.037560 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80355c10-d1b0-418c-a7b9-20366486946f-utilities" (OuterVolumeSpecName: "utilities") pod "80355c10-d1b0-418c-a7b9-20366486946f" (UID: "80355c10-d1b0-418c-a7b9-20366486946f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.052273 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80355c10-d1b0-418c-a7b9-20366486946f-kube-api-access-xt2vl" (OuterVolumeSpecName: "kube-api-access-xt2vl") pod "80355c10-d1b0-418c-a7b9-20366486946f" (UID: "80355c10-d1b0-418c-a7b9-20366486946f"). InnerVolumeSpecName "kube-api-access-xt2vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.088654 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80355c10-d1b0-418c-a7b9-20366486946f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80355c10-d1b0-418c-a7b9-20366486946f" (UID: "80355c10-d1b0-418c-a7b9-20366486946f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.140092 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80355c10-d1b0-418c-a7b9-20366486946f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.140136 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt2vl\" (UniqueName: \"kubernetes.io/projected/80355c10-d1b0-418c-a7b9-20366486946f-kube-api-access-xt2vl\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.140151 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80355c10-d1b0-418c-a7b9-20366486946f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.458914 4895 generic.go:334] "Generic (PLEG): container finished" podID="80355c10-d1b0-418c-a7b9-20366486946f" containerID="98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba" exitCode=0 Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.459020 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x2t6" event={"ID":"80355c10-d1b0-418c-a7b9-20366486946f","Type":"ContainerDied","Data":"98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba"} Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.459339 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x2t6" event={"ID":"80355c10-d1b0-418c-a7b9-20366486946f","Type":"ContainerDied","Data":"5c114ab9ceb4ed25d180ed7bd66883827dd59d9d62f8bf505d673702f8b18693"} Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.459046 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7x2t6" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.459411 4895 scope.go:117] "RemoveContainer" containerID="98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.487835 4895 scope.go:117] "RemoveContainer" containerID="1dd87a45cbd8f8e1c4126bf99c482fda0c47ada0f3fbf3ff9527e261e15748fb" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.498886 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x2t6"] Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.508838 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x2t6"] Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.515941 4895 scope.go:117] "RemoveContainer" containerID="953a2e5de3087d23819f11d0c8745e483992aa8d5eb68206e803e999e5b7d9a8" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.561993 4895 scope.go:117] "RemoveContainer" containerID="98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba" Jan 29 17:09:32 crc kubenswrapper[4895]: E0129 17:09:32.562617 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba\": container with ID starting with 98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba not found: ID does not exist" containerID="98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.562714 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba"} err="failed to get container status \"98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba\": rpc error: code = NotFound desc = could not find container 
\"98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba\": container with ID starting with 98f0341fa74620465287c73f2bd479ee18a8249e750a96d2073da639b7f6d9ba not found: ID does not exist" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.562775 4895 scope.go:117] "RemoveContainer" containerID="1dd87a45cbd8f8e1c4126bf99c482fda0c47ada0f3fbf3ff9527e261e15748fb" Jan 29 17:09:32 crc kubenswrapper[4895]: E0129 17:09:32.563448 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dd87a45cbd8f8e1c4126bf99c482fda0c47ada0f3fbf3ff9527e261e15748fb\": container with ID starting with 1dd87a45cbd8f8e1c4126bf99c482fda0c47ada0f3fbf3ff9527e261e15748fb not found: ID does not exist" containerID="1dd87a45cbd8f8e1c4126bf99c482fda0c47ada0f3fbf3ff9527e261e15748fb" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.563506 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dd87a45cbd8f8e1c4126bf99c482fda0c47ada0f3fbf3ff9527e261e15748fb"} err="failed to get container status \"1dd87a45cbd8f8e1c4126bf99c482fda0c47ada0f3fbf3ff9527e261e15748fb\": rpc error: code = NotFound desc = could not find container \"1dd87a45cbd8f8e1c4126bf99c482fda0c47ada0f3fbf3ff9527e261e15748fb\": container with ID starting with 1dd87a45cbd8f8e1c4126bf99c482fda0c47ada0f3fbf3ff9527e261e15748fb not found: ID does not exist" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.563544 4895 scope.go:117] "RemoveContainer" containerID="953a2e5de3087d23819f11d0c8745e483992aa8d5eb68206e803e999e5b7d9a8" Jan 29 17:09:32 crc kubenswrapper[4895]: E0129 17:09:32.563860 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"953a2e5de3087d23819f11d0c8745e483992aa8d5eb68206e803e999e5b7d9a8\": container with ID starting with 953a2e5de3087d23819f11d0c8745e483992aa8d5eb68206e803e999e5b7d9a8 not found: ID does not exist" 
containerID="953a2e5de3087d23819f11d0c8745e483992aa8d5eb68206e803e999e5b7d9a8" Jan 29 17:09:32 crc kubenswrapper[4895]: I0129 17:09:32.563912 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"953a2e5de3087d23819f11d0c8745e483992aa8d5eb68206e803e999e5b7d9a8"} err="failed to get container status \"953a2e5de3087d23819f11d0c8745e483992aa8d5eb68206e803e999e5b7d9a8\": rpc error: code = NotFound desc = could not find container \"953a2e5de3087d23819f11d0c8745e483992aa8d5eb68206e803e999e5b7d9a8\": container with ID starting with 953a2e5de3087d23819f11d0c8745e483992aa8d5eb68206e803e999e5b7d9a8 not found: ID does not exist" Jan 29 17:09:33 crc kubenswrapper[4895]: I0129 17:09:33.050297 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80355c10-d1b0-418c-a7b9-20366486946f" path="/var/lib/kubelet/pods/80355c10-d1b0-418c-a7b9-20366486946f/volumes" Jan 29 17:09:35 crc kubenswrapper[4895]: I0129 17:09:35.040603 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:09:35 crc kubenswrapper[4895]: I0129 17:09:35.091454 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 29 17:09:36 crc kubenswrapper[4895]: I0129 17:09:36.497421 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9dxd" event={"ID":"29e9ec80-fcd0-4eca-8c96-01a531355911","Type":"ContainerStarted","Data":"6cbfad6ad52e4107cae71530263b06f08038eae10f26bee3227aca5c881c1d5a"} Jan 29 17:09:37 crc kubenswrapper[4895]: I0129 17:09:37.510146 4895 generic.go:334] "Generic (PLEG): container finished" podID="29e9ec80-fcd0-4eca-8c96-01a531355911" containerID="6cbfad6ad52e4107cae71530263b06f08038eae10f26bee3227aca5c881c1d5a" exitCode=0 Jan 29 17:09:37 crc kubenswrapper[4895]: I0129 17:09:37.510223 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-c9dxd" event={"ID":"29e9ec80-fcd0-4eca-8c96-01a531355911","Type":"ContainerDied","Data":"6cbfad6ad52e4107cae71530263b06f08038eae10f26bee3227aca5c881c1d5a"} Jan 29 17:09:39 crc kubenswrapper[4895]: I0129 17:09:39.531090 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9dxd" event={"ID":"29e9ec80-fcd0-4eca-8c96-01a531355911","Type":"ContainerStarted","Data":"4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3"} Jan 29 17:09:39 crc kubenswrapper[4895]: I0129 17:09:39.547956 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c9dxd" podStartSLOduration=2.872444246 podStartE2EDuration="11m7.547938159s" podCreationTimestamp="2026-01-29 16:58:32 +0000 UTC" firstStartedPulling="2026-01-29 16:58:33.415947025 +0000 UTC m=+2797.218924289" lastFinishedPulling="2026-01-29 17:09:38.091440938 +0000 UTC m=+3461.894418202" observedRunningTime="2026-01-29 17:09:39.546202562 +0000 UTC m=+3463.349179846" watchObservedRunningTime="2026-01-29 17:09:39.547938159 +0000 UTC m=+3463.350915423" Jan 29 17:09:40 crc kubenswrapper[4895]: E0129 17:09:40.039045 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" Jan 29 17:09:40 crc kubenswrapper[4895]: I0129 17:09:40.562016 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 17:09:41 crc kubenswrapper[4895]: I0129 17:09:41.547248 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 29 17:09:42 crc kubenswrapper[4895]: I0129 17:09:42.429156 4895 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 17:09:42 crc kubenswrapper[4895]: I0129 17:09:42.430175 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 17:09:43 crc kubenswrapper[4895]: I0129 17:09:43.482741 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" containerName="registry-server" probeResult="failure" output=< Jan 29 17:09:43 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 17:09:43 crc kubenswrapper[4895]: > Jan 29 17:09:45 crc kubenswrapper[4895]: E0129 17:09:45.039500 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" Jan 29 17:09:46 crc kubenswrapper[4895]: I0129 17:09:46.658800 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 29 17:09:52 crc kubenswrapper[4895]: I0129 17:09:52.477327 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 17:09:52 crc kubenswrapper[4895]: I0129 17:09:52.528119 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 17:09:52 crc kubenswrapper[4895]: I0129 17:09:52.715086 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c9dxd"] Jan 29 17:09:53 crc kubenswrapper[4895]: I0129 17:09:53.656900 4895 generic.go:334] "Generic (PLEG): container finished" podID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" 
containerID="4941baf33bdad83ef9aa21f6d8aff0b4261f275a558a2ebea87dd87e48b3ab56" exitCode=0 Jan 29 17:09:53 crc kubenswrapper[4895]: I0129 17:09:53.657466 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c9dxd" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" containerName="registry-server" containerID="cri-o://4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3" gracePeriod=2 Jan 29 17:09:53 crc kubenswrapper[4895]: I0129 17:09:53.657068 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zbgz" event={"ID":"6337deb0-d51e-4fa1-8aab-24cebc2988c2","Type":"ContainerDied","Data":"4941baf33bdad83ef9aa21f6d8aff0b4261f275a558a2ebea87dd87e48b3ab56"} Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.160448 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.315864 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8r64\" (UniqueName: \"kubernetes.io/projected/29e9ec80-fcd0-4eca-8c96-01a531355911-kube-api-access-t8r64\") pod \"29e9ec80-fcd0-4eca-8c96-01a531355911\" (UID: \"29e9ec80-fcd0-4eca-8c96-01a531355911\") " Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.316411 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e9ec80-fcd0-4eca-8c96-01a531355911-catalog-content\") pod \"29e9ec80-fcd0-4eca-8c96-01a531355911\" (UID: \"29e9ec80-fcd0-4eca-8c96-01a531355911\") " Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.316558 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e9ec80-fcd0-4eca-8c96-01a531355911-utilities\") pod \"29e9ec80-fcd0-4eca-8c96-01a531355911\" (UID: 
\"29e9ec80-fcd0-4eca-8c96-01a531355911\") " Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.318052 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e9ec80-fcd0-4eca-8c96-01a531355911-utilities" (OuterVolumeSpecName: "utilities") pod "29e9ec80-fcd0-4eca-8c96-01a531355911" (UID: "29e9ec80-fcd0-4eca-8c96-01a531355911"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.329245 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e9ec80-fcd0-4eca-8c96-01a531355911-kube-api-access-t8r64" (OuterVolumeSpecName: "kube-api-access-t8r64") pod "29e9ec80-fcd0-4eca-8c96-01a531355911" (UID: "29e9ec80-fcd0-4eca-8c96-01a531355911"). InnerVolumeSpecName "kube-api-access-t8r64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.418417 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8r64\" (UniqueName: \"kubernetes.io/projected/29e9ec80-fcd0-4eca-8c96-01a531355911-kube-api-access-t8r64\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.418457 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e9ec80-fcd0-4eca-8c96-01a531355911-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.427687 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e9ec80-fcd0-4eca-8c96-01a531355911-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29e9ec80-fcd0-4eca-8c96-01a531355911" (UID: "29e9ec80-fcd0-4eca-8c96-01a531355911"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.522712 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e9ec80-fcd0-4eca-8c96-01a531355911-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.669745 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zbgz" event={"ID":"6337deb0-d51e-4fa1-8aab-24cebc2988c2","Type":"ContainerStarted","Data":"84a5dc681c5ee82149e47a30e6644a49381dd85e36ea8140ebdd145ad1c51890"} Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.673604 4895 generic.go:334] "Generic (PLEG): container finished" podID="29e9ec80-fcd0-4eca-8c96-01a531355911" containerID="4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3" exitCode=0 Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.673653 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9dxd" event={"ID":"29e9ec80-fcd0-4eca-8c96-01a531355911","Type":"ContainerDied","Data":"4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3"} Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.673673 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c9dxd" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.673686 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9dxd" event={"ID":"29e9ec80-fcd0-4eca-8c96-01a531355911","Type":"ContainerDied","Data":"470a47a5ae21df9b5776f7e2d893a33fcd1c9a1378cde95f913035fb53ac4644"} Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.673707 4895 scope.go:117] "RemoveContainer" containerID="4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.693448 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2zbgz" podStartSLOduration=3.318586936 podStartE2EDuration="11m1.693423815s" podCreationTimestamp="2026-01-29 16:58:53 +0000 UTC" firstStartedPulling="2026-01-29 16:58:55.670777409 +0000 UTC m=+2819.473754673" lastFinishedPulling="2026-01-29 17:09:54.045614288 +0000 UTC m=+3477.848591552" observedRunningTime="2026-01-29 17:09:54.689345485 +0000 UTC m=+3478.492322759" watchObservedRunningTime="2026-01-29 17:09:54.693423815 +0000 UTC m=+3478.496401079" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.695825 4895 scope.go:117] "RemoveContainer" containerID="6cbfad6ad52e4107cae71530263b06f08038eae10f26bee3227aca5c881c1d5a" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.719980 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c9dxd"] Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.728715 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c9dxd"] Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.752029 4895 scope.go:117] "RemoveContainer" containerID="f509660da55fdc56d583eebc1bdebf15b3c280e6e4dbe29d0711b46bb9c11359" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.776955 4895 scope.go:117] 
"RemoveContainer" containerID="4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3" Jan 29 17:09:54 crc kubenswrapper[4895]: E0129 17:09:54.777948 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3\": container with ID starting with 4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3 not found: ID does not exist" containerID="4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.778011 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3"} err="failed to get container status \"4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3\": rpc error: code = NotFound desc = could not find container \"4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3\": container with ID starting with 4889b7c0619252f46883830ca9d62441135cb0b0a2a333e17e0e7b1e1ebb50c3 not found: ID does not exist" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.778046 4895 scope.go:117] "RemoveContainer" containerID="6cbfad6ad52e4107cae71530263b06f08038eae10f26bee3227aca5c881c1d5a" Jan 29 17:09:54 crc kubenswrapper[4895]: E0129 17:09:54.778572 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cbfad6ad52e4107cae71530263b06f08038eae10f26bee3227aca5c881c1d5a\": container with ID starting with 6cbfad6ad52e4107cae71530263b06f08038eae10f26bee3227aca5c881c1d5a not found: ID does not exist" containerID="6cbfad6ad52e4107cae71530263b06f08038eae10f26bee3227aca5c881c1d5a" Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.778609 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6cbfad6ad52e4107cae71530263b06f08038eae10f26bee3227aca5c881c1d5a"} err="failed to get container status \"6cbfad6ad52e4107cae71530263b06f08038eae10f26bee3227aca5c881c1d5a\": rpc error: code = NotFound desc = could not find container \"6cbfad6ad52e4107cae71530263b06f08038eae10f26bee3227aca5c881c1d5a\": container with ID starting with 6cbfad6ad52e4107cae71530263b06f08038eae10f26bee3227aca5c881c1d5a not found: ID does not exist"
Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.778640 4895 scope.go:117] "RemoveContainer" containerID="f509660da55fdc56d583eebc1bdebf15b3c280e6e4dbe29d0711b46bb9c11359"
Jan 29 17:09:54 crc kubenswrapper[4895]: E0129 17:09:54.779219 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f509660da55fdc56d583eebc1bdebf15b3c280e6e4dbe29d0711b46bb9c11359\": container with ID starting with f509660da55fdc56d583eebc1bdebf15b3c280e6e4dbe29d0711b46bb9c11359 not found: ID does not exist" containerID="f509660da55fdc56d583eebc1bdebf15b3c280e6e4dbe29d0711b46bb9c11359"
Jan 29 17:09:54 crc kubenswrapper[4895]: I0129 17:09:54.779296 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f509660da55fdc56d583eebc1bdebf15b3c280e6e4dbe29d0711b46bb9c11359"} err="failed to get container status \"f509660da55fdc56d583eebc1bdebf15b3c280e6e4dbe29d0711b46bb9c11359\": rpc error: code = NotFound desc = could not find container \"f509660da55fdc56d583eebc1bdebf15b3c280e6e4dbe29d0711b46bb9c11359\": container with ID starting with f509660da55fdc56d583eebc1bdebf15b3c280e6e4dbe29d0711b46bb9c11359 not found: ID does not exist"
Jan 29 17:09:55 crc kubenswrapper[4895]: I0129 17:09:55.047747 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" path="/var/lib/kubelet/pods/29e9ec80-fcd0-4eca-8c96-01a531355911/volumes"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.532126 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rdwzt"]
Jan 29 17:09:56 crc kubenswrapper[4895]: E0129 17:09:56.532645 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80355c10-d1b0-418c-a7b9-20366486946f" containerName="extract-utilities"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.532662 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="80355c10-d1b0-418c-a7b9-20366486946f" containerName="extract-utilities"
Jan 29 17:09:56 crc kubenswrapper[4895]: E0129 17:09:56.532677 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" containerName="extract-utilities"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.532683 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" containerName="extract-utilities"
Jan 29 17:09:56 crc kubenswrapper[4895]: E0129 17:09:56.532696 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" containerName="registry-server"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.532703 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" containerName="registry-server"
Jan 29 17:09:56 crc kubenswrapper[4895]: E0129 17:09:56.532717 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80355c10-d1b0-418c-a7b9-20366486946f" containerName="extract-content"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.532724 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="80355c10-d1b0-418c-a7b9-20366486946f" containerName="extract-content"
Jan 29 17:09:56 crc kubenswrapper[4895]: E0129 17:09:56.532750 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80355c10-d1b0-418c-a7b9-20366486946f" containerName="registry-server"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.532756 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="80355c10-d1b0-418c-a7b9-20366486946f" containerName="registry-server"
Jan 29 17:09:56 crc kubenswrapper[4895]: E0129 17:09:56.532770 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" containerName="extract-content"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.532776 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" containerName="extract-content"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.533010 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="80355c10-d1b0-418c-a7b9-20366486946f" containerName="registry-server"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.533020 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e9ec80-fcd0-4eca-8c96-01a531355911" containerName="registry-server"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.534402 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.545952 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rdwzt"]
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.565242 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9e239-b65e-4860-9b32-8c2827e9c12a-utilities\") pod \"redhat-operators-rdwzt\" (UID: \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\") " pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.565673 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl89q\" (UniqueName: \"kubernetes.io/projected/9bc9e239-b65e-4860-9b32-8c2827e9c12a-kube-api-access-gl89q\") pod \"redhat-operators-rdwzt\" (UID: \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\") " pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.565706 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9e239-b65e-4860-9b32-8c2827e9c12a-catalog-content\") pod \"redhat-operators-rdwzt\" (UID: \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\") " pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.669288 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl89q\" (UniqueName: \"kubernetes.io/projected/9bc9e239-b65e-4860-9b32-8c2827e9c12a-kube-api-access-gl89q\") pod \"redhat-operators-rdwzt\" (UID: \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\") " pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.669758 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9e239-b65e-4860-9b32-8c2827e9c12a-catalog-content\") pod \"redhat-operators-rdwzt\" (UID: \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\") " pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.669892 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9e239-b65e-4860-9b32-8c2827e9c12a-utilities\") pod \"redhat-operators-rdwzt\" (UID: \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\") " pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.670387 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9e239-b65e-4860-9b32-8c2827e9c12a-catalog-content\") pod \"redhat-operators-rdwzt\" (UID: \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\") " pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.670431 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9e239-b65e-4860-9b32-8c2827e9c12a-utilities\") pod \"redhat-operators-rdwzt\" (UID: \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\") " pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.689546 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl89q\" (UniqueName: \"kubernetes.io/projected/9bc9e239-b65e-4860-9b32-8c2827e9c12a-kube-api-access-gl89q\") pod \"redhat-operators-rdwzt\" (UID: \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\") " pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:09:56 crc kubenswrapper[4895]: I0129 17:09:56.871288 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:09:57 crc kubenswrapper[4895]: I0129 17:09:57.704983 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"14f81c9c-0e13-446b-a525-370c39259440","Type":"ContainerStarted","Data":"ba7c6cfe87c1dd5cdb19a3b1d92066f87ac0375e665d253e4e451c7bdbf63861"}
Jan 29 17:09:57 crc kubenswrapper[4895]: I0129 17:09:57.827138 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rdwzt"]
Jan 29 17:09:58 crc kubenswrapper[4895]: I0129 17:09:58.726346 4895 generic.go:334] "Generic (PLEG): container finished" podID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" containerID="8f0e73d878dc72fe254efc1238170b4c58cc7532655cfb88c55961a04878fac9" exitCode=0
Jan 29 17:09:58 crc kubenswrapper[4895]: I0129 17:09:58.726449 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdwzt" event={"ID":"9bc9e239-b65e-4860-9b32-8c2827e9c12a","Type":"ContainerDied","Data":"8f0e73d878dc72fe254efc1238170b4c58cc7532655cfb88c55961a04878fac9"}
Jan 29 17:09:58 crc kubenswrapper[4895]: I0129 17:09:58.727065 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdwzt" event={"ID":"9bc9e239-b65e-4860-9b32-8c2827e9c12a","Type":"ContainerStarted","Data":"adc50ba0acc5eaeb5fa279883322284091246cd7df4e3cb93f296c2d8fc3d20f"}
Jan 29 17:09:58 crc kubenswrapper[4895]: I0129 17:09:58.730796 4895 generic.go:334] "Generic (PLEG): container finished" podID="14f81c9c-0e13-446b-a525-370c39259440" containerID="ba7c6cfe87c1dd5cdb19a3b1d92066f87ac0375e665d253e4e451c7bdbf63861" exitCode=0
Jan 29 17:09:58 crc kubenswrapper[4895]: I0129 17:09:58.730887 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"14f81c9c-0e13-446b-a525-370c39259440","Type":"ContainerDied","Data":"ba7c6cfe87c1dd5cdb19a3b1d92066f87ac0375e665d253e4e451c7bdbf63861"}
Jan 29 17:10:00 crc kubenswrapper[4895]: I0129 17:10:00.752573 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"14f81c9c-0e13-446b-a525-370c39259440","Type":"ContainerStarted","Data":"5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887"}
Jan 29 17:10:00 crc kubenswrapper[4895]: I0129 17:10:00.781852 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8s2lq" podStartSLOduration=3.583901982 podStartE2EDuration="10m52.781836256s" podCreationTimestamp="2026-01-29 16:59:08 +0000 UTC" firstStartedPulling="2026-01-29 16:59:10.834118497 +0000 UTC m=+2834.637095761" lastFinishedPulling="2026-01-29 17:10:00.032052771 +0000 UTC m=+3483.835030035" observedRunningTime="2026-01-29 17:10:00.775821403 +0000 UTC m=+3484.578798667" watchObservedRunningTime="2026-01-29 17:10:00.781836256 +0000 UTC m=+3484.584813520"
Jan 29 17:10:01 crc kubenswrapper[4895]: I0129 17:10:01.762845 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdwzt" event={"ID":"9bc9e239-b65e-4860-9b32-8c2827e9c12a","Type":"ContainerStarted","Data":"6f831f2df9a83dfe64f61440962a678369d8b0502924076c5748bc719fcc2d48"}
Jan 29 17:10:02 crc kubenswrapper[4895]: I0129 17:10:02.779061 4895 generic.go:334] "Generic (PLEG): container finished" podID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" containerID="6f831f2df9a83dfe64f61440962a678369d8b0502924076c5748bc719fcc2d48" exitCode=0
Jan 29 17:10:02 crc kubenswrapper[4895]: I0129 17:10:02.779117 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdwzt" event={"ID":"9bc9e239-b65e-4860-9b32-8c2827e9c12a","Type":"ContainerDied","Data":"6f831f2df9a83dfe64f61440962a678369d8b0502924076c5748bc719fcc2d48"}
Jan 29 17:10:03 crc kubenswrapper[4895]: I0129 17:10:03.918576 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2zbgz"
Jan 29 17:10:03 crc kubenswrapper[4895]: I0129 17:10:03.919029 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2zbgz"
Jan 29 17:10:03 crc kubenswrapper[4895]: I0129 17:10:03.977750 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2zbgz"
Jan 29 17:10:04 crc kubenswrapper[4895]: I0129 17:10:04.864602 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2zbgz"
Jan 29 17:10:06 crc kubenswrapper[4895]: I0129 17:10:06.315606 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2zbgz"]
Jan 29 17:10:06 crc kubenswrapper[4895]: I0129 17:10:06.812449 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2zbgz" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" containerName="registry-server" containerID="cri-o://84a5dc681c5ee82149e47a30e6644a49381dd85e36ea8140ebdd145ad1c51890" gracePeriod=2
Jan 29 17:10:09 crc kubenswrapper[4895]: I0129 17:10:09.148478 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8s2lq"
Jan 29 17:10:09 crc kubenswrapper[4895]: I0129 17:10:09.148796 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8s2lq"
Jan 29 17:10:09 crc kubenswrapper[4895]: I0129 17:10:09.201362 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8s2lq"
Jan 29 17:10:09 crc kubenswrapper[4895]: I0129 17:10:09.877959 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8s2lq"
Jan 29 17:10:10 crc kubenswrapper[4895]: I0129 17:10:10.912798 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8s2lq"]
Jan 29 17:10:13 crc kubenswrapper[4895]: I0129 17:10:13.539694 4895 generic.go:334] "Generic (PLEG): container finished" podID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" containerID="84a5dc681c5ee82149e47a30e6644a49381dd85e36ea8140ebdd145ad1c51890" exitCode=0
Jan 29 17:10:13 crc kubenswrapper[4895]: I0129 17:10:13.539766 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zbgz" event={"ID":"6337deb0-d51e-4fa1-8aab-24cebc2988c2","Type":"ContainerDied","Data":"84a5dc681c5ee82149e47a30e6644a49381dd85e36ea8140ebdd145ad1c51890"}
Jan 29 17:10:13 crc kubenswrapper[4895]: I0129 17:10:13.911844 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2zbgz"
Jan 29 17:10:13 crc kubenswrapper[4895]: I0129 17:10:13.950861 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj5zt\" (UniqueName: \"kubernetes.io/projected/6337deb0-d51e-4fa1-8aab-24cebc2988c2-kube-api-access-jj5zt\") pod \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\" (UID: \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\") "
Jan 29 17:10:13 crc kubenswrapper[4895]: I0129 17:10:13.951005 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6337deb0-d51e-4fa1-8aab-24cebc2988c2-catalog-content\") pod \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\" (UID: \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\") "
Jan 29 17:10:13 crc kubenswrapper[4895]: I0129 17:10:13.951145 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6337deb0-d51e-4fa1-8aab-24cebc2988c2-utilities\") pod \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\" (UID: \"6337deb0-d51e-4fa1-8aab-24cebc2988c2\") "
Jan 29 17:10:13 crc kubenswrapper[4895]: I0129 17:10:13.952363 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6337deb0-d51e-4fa1-8aab-24cebc2988c2-utilities" (OuterVolumeSpecName: "utilities") pod "6337deb0-d51e-4fa1-8aab-24cebc2988c2" (UID: "6337deb0-d51e-4fa1-8aab-24cebc2988c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:10:13 crc kubenswrapper[4895]: I0129 17:10:13.963388 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6337deb0-d51e-4fa1-8aab-24cebc2988c2-kube-api-access-jj5zt" (OuterVolumeSpecName: "kube-api-access-jj5zt") pod "6337deb0-d51e-4fa1-8aab-24cebc2988c2" (UID: "6337deb0-d51e-4fa1-8aab-24cebc2988c2"). InnerVolumeSpecName "kube-api-access-jj5zt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.015689 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6337deb0-d51e-4fa1-8aab-24cebc2988c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6337deb0-d51e-4fa1-8aab-24cebc2988c2" (UID: "6337deb0-d51e-4fa1-8aab-24cebc2988c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.054208 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj5zt\" (UniqueName: \"kubernetes.io/projected/6337deb0-d51e-4fa1-8aab-24cebc2988c2-kube-api-access-jj5zt\") on node \"crc\" DevicePath \"\""
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.054245 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6337deb0-d51e-4fa1-8aab-24cebc2988c2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.054254 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6337deb0-d51e-4fa1-8aab-24cebc2988c2-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.551045 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zbgz" event={"ID":"6337deb0-d51e-4fa1-8aab-24cebc2988c2","Type":"ContainerDied","Data":"d6d9beee1e716c2040c990ee4a283de2f73514adea37abfbf39984d05d7d14d2"}
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.551305 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2zbgz"
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.551416 4895 scope.go:117] "RemoveContainer" containerID="84a5dc681c5ee82149e47a30e6644a49381dd85e36ea8140ebdd145ad1c51890"
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.557218 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8s2lq" podUID="14f81c9c-0e13-446b-a525-370c39259440" containerName="registry-server" containerID="cri-o://5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887" gracePeriod=2
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.557855 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdwzt" event={"ID":"9bc9e239-b65e-4860-9b32-8c2827e9c12a","Type":"ContainerStarted","Data":"190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5"}
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.585890 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rdwzt" podStartSLOduration=3.649069583 podStartE2EDuration="18.585858774s" podCreationTimestamp="2026-01-29 17:09:56 +0000 UTC" firstStartedPulling="2026-01-29 17:09:58.730561737 +0000 UTC m=+3482.533539001" lastFinishedPulling="2026-01-29 17:10:13.667350918 +0000 UTC m=+3497.470328192" observedRunningTime="2026-01-29 17:10:14.581675101 +0000 UTC m=+3498.384652365" watchObservedRunningTime="2026-01-29 17:10:14.585858774 +0000 UTC m=+3498.388836038"
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.589255 4895 scope.go:117] "RemoveContainer" containerID="4941baf33bdad83ef9aa21f6d8aff0b4261f275a558a2ebea87dd87e48b3ab56"
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.611762 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2zbgz"]
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.616425 4895 scope.go:117] "RemoveContainer" containerID="7a68ee92b965b04e91237cf617faf70bf18eb48d60049aaea394d183b7919922"
Jan 29 17:10:14 crc kubenswrapper[4895]: I0129 17:10:14.619666 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2zbgz"]
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.049938 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" path="/var/lib/kubelet/pods/6337deb0-d51e-4fa1-8aab-24cebc2988c2/volumes"
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.074761 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8s2lq"
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.180643 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7xwj\" (UniqueName: \"kubernetes.io/projected/14f81c9c-0e13-446b-a525-370c39259440-kube-api-access-q7xwj\") pod \"14f81c9c-0e13-446b-a525-370c39259440\" (UID: \"14f81c9c-0e13-446b-a525-370c39259440\") "
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.180794 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14f81c9c-0e13-446b-a525-370c39259440-utilities\") pod \"14f81c9c-0e13-446b-a525-370c39259440\" (UID: \"14f81c9c-0e13-446b-a525-370c39259440\") "
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.180827 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14f81c9c-0e13-446b-a525-370c39259440-catalog-content\") pod \"14f81c9c-0e13-446b-a525-370c39259440\" (UID: \"14f81c9c-0e13-446b-a525-370c39259440\") "
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.186454 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14f81c9c-0e13-446b-a525-370c39259440-utilities" (OuterVolumeSpecName: "utilities") pod "14f81c9c-0e13-446b-a525-370c39259440" (UID: "14f81c9c-0e13-446b-a525-370c39259440"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.187136 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f81c9c-0e13-446b-a525-370c39259440-kube-api-access-q7xwj" (OuterVolumeSpecName: "kube-api-access-q7xwj") pod "14f81c9c-0e13-446b-a525-370c39259440" (UID: "14f81c9c-0e13-446b-a525-370c39259440"). InnerVolumeSpecName "kube-api-access-q7xwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.231851 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14f81c9c-0e13-446b-a525-370c39259440-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14f81c9c-0e13-446b-a525-370c39259440" (UID: "14f81c9c-0e13-446b-a525-370c39259440"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.283473 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7xwj\" (UniqueName: \"kubernetes.io/projected/14f81c9c-0e13-446b-a525-370c39259440-kube-api-access-q7xwj\") on node \"crc\" DevicePath \"\""
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.283518 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14f81c9c-0e13-446b-a525-370c39259440-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.283529 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14f81c9c-0e13-446b-a525-370c39259440-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.575275 4895 generic.go:334] "Generic (PLEG): container finished" podID="14f81c9c-0e13-446b-a525-370c39259440" containerID="5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887" exitCode=0
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.575389 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8s2lq"
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.575391 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"14f81c9c-0e13-446b-a525-370c39259440","Type":"ContainerDied","Data":"5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887"}
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.575463 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8s2lq" event={"ID":"14f81c9c-0e13-446b-a525-370c39259440","Type":"ContainerDied","Data":"f00ad1f5c64bea0fd1148495f5a41a5ca85a7503accb01d56713056615748797"}
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.575497 4895 scope.go:117] "RemoveContainer" containerID="5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887"
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.603543 4895 scope.go:117] "RemoveContainer" containerID="ba7c6cfe87c1dd5cdb19a3b1d92066f87ac0375e665d253e4e451c7bdbf63861"
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.612081 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8s2lq"]
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.621145 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8s2lq"]
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.624938 4895 scope.go:117] "RemoveContainer" containerID="28dcc3d50de794ccfcca70781bb83143642b8951a754af9eeedf02e40eb97d3b"
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.667324 4895 scope.go:117] "RemoveContainer" containerID="5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887"
Jan 29 17:10:15 crc kubenswrapper[4895]: E0129 17:10:15.667839 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887\": container with ID starting with 5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887 not found: ID does not exist" containerID="5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887"
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.667914 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887"} err="failed to get container status \"5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887\": rpc error: code = NotFound desc = could not find container \"5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887\": container with ID starting with 5cc5c2e47ceba650fa6594d5658a185e8667177e9cae363fe766856d94616887 not found: ID does not exist"
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.667952 4895 scope.go:117] "RemoveContainer" containerID="ba7c6cfe87c1dd5cdb19a3b1d92066f87ac0375e665d253e4e451c7bdbf63861"
Jan 29 17:10:15 crc kubenswrapper[4895]: E0129 17:10:15.668470 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7c6cfe87c1dd5cdb19a3b1d92066f87ac0375e665d253e4e451c7bdbf63861\": container with ID starting with ba7c6cfe87c1dd5cdb19a3b1d92066f87ac0375e665d253e4e451c7bdbf63861 not found: ID does not exist" containerID="ba7c6cfe87c1dd5cdb19a3b1d92066f87ac0375e665d253e4e451c7bdbf63861"
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.668500 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7c6cfe87c1dd5cdb19a3b1d92066f87ac0375e665d253e4e451c7bdbf63861"} err="failed to get container status \"ba7c6cfe87c1dd5cdb19a3b1d92066f87ac0375e665d253e4e451c7bdbf63861\": rpc error: code = NotFound desc = could not find container \"ba7c6cfe87c1dd5cdb19a3b1d92066f87ac0375e665d253e4e451c7bdbf63861\": container with ID starting with ba7c6cfe87c1dd5cdb19a3b1d92066f87ac0375e665d253e4e451c7bdbf63861 not found: ID does not exist"
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.668522 4895 scope.go:117] "RemoveContainer" containerID="28dcc3d50de794ccfcca70781bb83143642b8951a754af9eeedf02e40eb97d3b"
Jan 29 17:10:15 crc kubenswrapper[4895]: E0129 17:10:15.668980 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28dcc3d50de794ccfcca70781bb83143642b8951a754af9eeedf02e40eb97d3b\": container with ID starting with 28dcc3d50de794ccfcca70781bb83143642b8951a754af9eeedf02e40eb97d3b not found: ID does not exist" containerID="28dcc3d50de794ccfcca70781bb83143642b8951a754af9eeedf02e40eb97d3b"
Jan 29 17:10:15 crc kubenswrapper[4895]: I0129 17:10:15.669017 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28dcc3d50de794ccfcca70781bb83143642b8951a754af9eeedf02e40eb97d3b"} err="failed to get container status \"28dcc3d50de794ccfcca70781bb83143642b8951a754af9eeedf02e40eb97d3b\": rpc error: code = NotFound desc = could not find container \"28dcc3d50de794ccfcca70781bb83143642b8951a754af9eeedf02e40eb97d3b\": container with ID starting with 28dcc3d50de794ccfcca70781bb83143642b8951a754af9eeedf02e40eb97d3b not found: ID does not exist"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.335922 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l8nck"]
Jan 29 17:10:16 crc kubenswrapper[4895]: E0129 17:10:16.337143 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f81c9c-0e13-446b-a525-370c39259440" containerName="extract-content"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.337180 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f81c9c-0e13-446b-a525-370c39259440" containerName="extract-content"
Jan 29 17:10:16 crc kubenswrapper[4895]: E0129 17:10:16.337219 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" containerName="registry-server"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.337234 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" containerName="registry-server"
Jan 29 17:10:16 crc kubenswrapper[4895]: E0129 17:10:16.337252 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f81c9c-0e13-446b-a525-370c39259440" containerName="extract-utilities"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.337267 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f81c9c-0e13-446b-a525-370c39259440" containerName="extract-utilities"
Jan 29 17:10:16 crc kubenswrapper[4895]: E0129 17:10:16.337300 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" containerName="extract-content"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.337313 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" containerName="extract-content"
Jan 29 17:10:16 crc kubenswrapper[4895]: E0129 17:10:16.337330 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" containerName="extract-utilities"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.337342 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" containerName="extract-utilities"
Jan 29 17:10:16 crc kubenswrapper[4895]: E0129 17:10:16.337367 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f81c9c-0e13-446b-a525-370c39259440" containerName="registry-server"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.337379 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f81c9c-0e13-446b-a525-370c39259440" containerName="registry-server"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.337798 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f81c9c-0e13-446b-a525-370c39259440" containerName="registry-server"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.337859 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="6337deb0-d51e-4fa1-8aab-24cebc2988c2" containerName="registry-server"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.341400 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l8nck"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.354154 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l8nck"]
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.406377 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250733c1-8f74-4b98-a527-1111914407e6-catalog-content\") pod \"community-operators-l8nck\" (UID: \"250733c1-8f74-4b98-a527-1111914407e6\") " pod="openshift-marketplace/community-operators-l8nck"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.406794 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250733c1-8f74-4b98-a527-1111914407e6-utilities\") pod \"community-operators-l8nck\" (UID: \"250733c1-8f74-4b98-a527-1111914407e6\") " pod="openshift-marketplace/community-operators-l8nck"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.406850 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wvk4\" (UniqueName: \"kubernetes.io/projected/250733c1-8f74-4b98-a527-1111914407e6-kube-api-access-5wvk4\") pod \"community-operators-l8nck\" (UID: \"250733c1-8f74-4b98-a527-1111914407e6\") " pod="openshift-marketplace/community-operators-l8nck"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.508798 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250733c1-8f74-4b98-a527-1111914407e6-catalog-content\") pod \"community-operators-l8nck\" (UID: \"250733c1-8f74-4b98-a527-1111914407e6\") " pod="openshift-marketplace/community-operators-l8nck"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.508909 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250733c1-8f74-4b98-a527-1111914407e6-utilities\") pod \"community-operators-l8nck\" (UID: \"250733c1-8f74-4b98-a527-1111914407e6\") " pod="openshift-marketplace/community-operators-l8nck"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.508932 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wvk4\" (UniqueName: \"kubernetes.io/projected/250733c1-8f74-4b98-a527-1111914407e6-kube-api-access-5wvk4\") pod \"community-operators-l8nck\" (UID: \"250733c1-8f74-4b98-a527-1111914407e6\") " pod="openshift-marketplace/community-operators-l8nck"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.509418 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250733c1-8f74-4b98-a527-1111914407e6-catalog-content\") pod \"community-operators-l8nck\" (UID: \"250733c1-8f74-4b98-a527-1111914407e6\") " pod="openshift-marketplace/community-operators-l8nck"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.509448 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250733c1-8f74-4b98-a527-1111914407e6-utilities\") pod \"community-operators-l8nck\" (UID: \"250733c1-8f74-4b98-a527-1111914407e6\") " pod="openshift-marketplace/community-operators-l8nck"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.532307 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wvk4\" (UniqueName: \"kubernetes.io/projected/250733c1-8f74-4b98-a527-1111914407e6-kube-api-access-5wvk4\") pod \"community-operators-l8nck\" (UID: \"250733c1-8f74-4b98-a527-1111914407e6\") " pod="openshift-marketplace/community-operators-l8nck"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.662919 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l8nck"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.875622 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:10:16 crc kubenswrapper[4895]: I0129 17:10:16.876249 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rdwzt"
Jan 29 17:10:17 crc kubenswrapper[4895]: I0129 17:10:17.047844 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f81c9c-0e13-446b-a525-370c39259440" path="/var/lib/kubelet/pods/14f81c9c-0e13-446b-a525-370c39259440/volumes"
Jan 29 17:10:17 crc kubenswrapper[4895]: I0129 17:10:17.256223 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l8nck"]
Jan 29 17:10:17 crc kubenswrapper[4895]: W0129 17:10:17.260517 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod250733c1_8f74_4b98_a527_1111914407e6.slice/crio-99d727662c7163e8984dda73ac1ac842a092011dffe3f1685037382bc6bdbb4d WatchSource:0}: Error finding container 99d727662c7163e8984dda73ac1ac842a092011dffe3f1685037382bc6bdbb4d: Status 404 returned error can't find the container with id 99d727662c7163e8984dda73ac1ac842a092011dffe3f1685037382bc6bdbb4d
Jan 29 17:10:17 crc kubenswrapper[4895]: I0129 17:10:17.596638 4895 generic.go:334] "Generic (PLEG): container finished" podID="250733c1-8f74-4b98-a527-1111914407e6" containerID="a911fe10545555e93dd5494f3a1cb5301e37e2eb72364d6e8a9c4d660ff4999d" exitCode=0
Jan 29 17:10:17 crc kubenswrapper[4895]: I0129 17:10:17.596745 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8nck" event={"ID":"250733c1-8f74-4b98-a527-1111914407e6","Type":"ContainerDied","Data":"a911fe10545555e93dd5494f3a1cb5301e37e2eb72364d6e8a9c4d660ff4999d"}
Jan 29 17:10:17 crc kubenswrapper[4895]: I0129 17:10:17.597085 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8nck" event={"ID":"250733c1-8f74-4b98-a527-1111914407e6","Type":"ContainerStarted","Data":"99d727662c7163e8984dda73ac1ac842a092011dffe3f1685037382bc6bdbb4d"}
Jan 29 17:10:17 crc kubenswrapper[4895]: I0129 17:10:17.947100 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rdwzt" podUID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" containerName="registry-server" probeResult="failure" output=<
Jan 29 17:10:17 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s
Jan 29 17:10:17 crc kubenswrapper[4895]: >
Jan 29 17:10:18 crc kubenswrapper[4895]: I0129 17:10:18.728949 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kfs2b"]
Jan 29 17:10:18 crc kubenswrapper[4895]: I0129 17:10:18.731934 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:18 crc kubenswrapper[4895]: I0129 17:10:18.744411 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kfs2b"] Jan 29 17:10:18 crc kubenswrapper[4895]: I0129 17:10:18.877850 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-catalog-content\") pod \"certified-operators-kfs2b\" (UID: \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\") " pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:18 crc kubenswrapper[4895]: I0129 17:10:18.878023 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgpl\" (UniqueName: \"kubernetes.io/projected/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-kube-api-access-2jgpl\") pod \"certified-operators-kfs2b\" (UID: \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\") " pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:18 crc kubenswrapper[4895]: I0129 17:10:18.878078 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-utilities\") pod \"certified-operators-kfs2b\" (UID: \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\") " pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:18 crc kubenswrapper[4895]: I0129 17:10:18.979545 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jgpl\" (UniqueName: \"kubernetes.io/projected/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-kube-api-access-2jgpl\") pod \"certified-operators-kfs2b\" (UID: \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\") " pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:18 crc kubenswrapper[4895]: I0129 17:10:18.979659 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-utilities\") pod \"certified-operators-kfs2b\" (UID: \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\") " pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:18 crc kubenswrapper[4895]: I0129 17:10:18.979746 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-catalog-content\") pod \"certified-operators-kfs2b\" (UID: \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\") " pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:18 crc kubenswrapper[4895]: I0129 17:10:18.980247 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-utilities\") pod \"certified-operators-kfs2b\" (UID: \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\") " pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:18 crc kubenswrapper[4895]: I0129 17:10:18.980304 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-catalog-content\") pod \"certified-operators-kfs2b\" (UID: \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\") " pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:19 crc kubenswrapper[4895]: I0129 17:10:19.005569 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jgpl\" (UniqueName: \"kubernetes.io/projected/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-kube-api-access-2jgpl\") pod \"certified-operators-kfs2b\" (UID: \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\") " pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:19 crc kubenswrapper[4895]: I0129 17:10:19.067907 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:19 crc kubenswrapper[4895]: I0129 17:10:19.610484 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kfs2b"] Jan 29 17:10:19 crc kubenswrapper[4895]: I0129 17:10:19.632193 4895 generic.go:334] "Generic (PLEG): container finished" podID="250733c1-8f74-4b98-a527-1111914407e6" containerID="57e732ad7375c611d17766dbac897706c36d4680717b124f46e6df167620b916" exitCode=0 Jan 29 17:10:19 crc kubenswrapper[4895]: I0129 17:10:19.632248 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8nck" event={"ID":"250733c1-8f74-4b98-a527-1111914407e6","Type":"ContainerDied","Data":"57e732ad7375c611d17766dbac897706c36d4680717b124f46e6df167620b916"} Jan 29 17:10:20 crc kubenswrapper[4895]: I0129 17:10:20.641537 4895 generic.go:334] "Generic (PLEG): container finished" podID="e1a4c5f3-02fd-46ea-b907-2b6635b746fb" containerID="b26c4e79a52fcc7a6f1153724c7edcd40ba59fad4556c26bb58b1edf32b7d103" exitCode=0 Jan 29 17:10:20 crc kubenswrapper[4895]: I0129 17:10:20.641619 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfs2b" event={"ID":"e1a4c5f3-02fd-46ea-b907-2b6635b746fb","Type":"ContainerDied","Data":"b26c4e79a52fcc7a6f1153724c7edcd40ba59fad4556c26bb58b1edf32b7d103"} Jan 29 17:10:20 crc kubenswrapper[4895]: I0129 17:10:20.642297 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfs2b" event={"ID":"e1a4c5f3-02fd-46ea-b907-2b6635b746fb","Type":"ContainerStarted","Data":"1ad52bccc4152786994e421e59418a8c16100399f714c70bd732b1bcf4d1705b"} Jan 29 17:10:20 crc kubenswrapper[4895]: I0129 17:10:20.649647 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8nck" 
event={"ID":"250733c1-8f74-4b98-a527-1111914407e6","Type":"ContainerStarted","Data":"277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904"} Jan 29 17:10:20 crc kubenswrapper[4895]: I0129 17:10:20.701838 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l8nck" podStartSLOduration=2.247001559 podStartE2EDuration="4.701815313s" podCreationTimestamp="2026-01-29 17:10:16 +0000 UTC" firstStartedPulling="2026-01-29 17:10:17.598936705 +0000 UTC m=+3501.401913969" lastFinishedPulling="2026-01-29 17:10:20.053750459 +0000 UTC m=+3503.856727723" observedRunningTime="2026-01-29 17:10:20.691085301 +0000 UTC m=+3504.494062575" watchObservedRunningTime="2026-01-29 17:10:20.701815313 +0000 UTC m=+3504.504792577" Jan 29 17:10:21 crc kubenswrapper[4895]: I0129 17:10:21.660322 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfs2b" event={"ID":"e1a4c5f3-02fd-46ea-b907-2b6635b746fb","Type":"ContainerStarted","Data":"945d70e0cfb30d1aa82c1c14654fc90a61c8fd3f04361ff3447f36db934ce501"} Jan 29 17:10:22 crc kubenswrapper[4895]: I0129 17:10:22.670541 4895 generic.go:334] "Generic (PLEG): container finished" podID="e1a4c5f3-02fd-46ea-b907-2b6635b746fb" containerID="945d70e0cfb30d1aa82c1c14654fc90a61c8fd3f04361ff3447f36db934ce501" exitCode=0 Jan 29 17:10:22 crc kubenswrapper[4895]: I0129 17:10:22.670612 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfs2b" event={"ID":"e1a4c5f3-02fd-46ea-b907-2b6635b746fb","Type":"ContainerDied","Data":"945d70e0cfb30d1aa82c1c14654fc90a61c8fd3f04361ff3447f36db934ce501"} Jan 29 17:10:24 crc kubenswrapper[4895]: I0129 17:10:24.690195 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfs2b" 
event={"ID":"e1a4c5f3-02fd-46ea-b907-2b6635b746fb","Type":"ContainerStarted","Data":"998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be"} Jan 29 17:10:24 crc kubenswrapper[4895]: I0129 17:10:24.714408 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kfs2b" podStartSLOduration=3.399044101 podStartE2EDuration="6.714388548s" podCreationTimestamp="2026-01-29 17:10:18 +0000 UTC" firstStartedPulling="2026-01-29 17:10:20.642971885 +0000 UTC m=+3504.445949149" lastFinishedPulling="2026-01-29 17:10:23.958316332 +0000 UTC m=+3507.761293596" observedRunningTime="2026-01-29 17:10:24.711575362 +0000 UTC m=+3508.514552636" watchObservedRunningTime="2026-01-29 17:10:24.714388548 +0000 UTC m=+3508.517365812" Jan 29 17:10:26 crc kubenswrapper[4895]: I0129 17:10:26.663306 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l8nck" Jan 29 17:10:26 crc kubenswrapper[4895]: I0129 17:10:26.663852 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l8nck" Jan 29 17:10:26 crc kubenswrapper[4895]: I0129 17:10:26.715732 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l8nck" Jan 29 17:10:26 crc kubenswrapper[4895]: I0129 17:10:26.761444 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l8nck" Jan 29 17:10:26 crc kubenswrapper[4895]: I0129 17:10:26.929910 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rdwzt" Jan 29 17:10:26 crc kubenswrapper[4895]: I0129 17:10:26.980201 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rdwzt" Jan 29 17:10:27 crc kubenswrapper[4895]: I0129 17:10:27.123785 4895 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l8nck"] Jan 29 17:10:28 crc kubenswrapper[4895]: I0129 17:10:28.739414 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l8nck" podUID="250733c1-8f74-4b98-a527-1111914407e6" containerName="registry-server" containerID="cri-o://277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904" gracePeriod=2 Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.068306 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.068848 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.117791 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.232151 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l8nck" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.305328 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250733c1-8f74-4b98-a527-1111914407e6-catalog-content\") pod \"250733c1-8f74-4b98-a527-1111914407e6\" (UID: \"250733c1-8f74-4b98-a527-1111914407e6\") " Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.305386 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250733c1-8f74-4b98-a527-1111914407e6-utilities\") pod \"250733c1-8f74-4b98-a527-1111914407e6\" (UID: \"250733c1-8f74-4b98-a527-1111914407e6\") " Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.305416 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wvk4\" (UniqueName: \"kubernetes.io/projected/250733c1-8f74-4b98-a527-1111914407e6-kube-api-access-5wvk4\") pod \"250733c1-8f74-4b98-a527-1111914407e6\" (UID: \"250733c1-8f74-4b98-a527-1111914407e6\") " Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.306215 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/250733c1-8f74-4b98-a527-1111914407e6-utilities" (OuterVolumeSpecName: "utilities") pod "250733c1-8f74-4b98-a527-1111914407e6" (UID: "250733c1-8f74-4b98-a527-1111914407e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.311304 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250733c1-8f74-4b98-a527-1111914407e6-kube-api-access-5wvk4" (OuterVolumeSpecName: "kube-api-access-5wvk4") pod "250733c1-8f74-4b98-a527-1111914407e6" (UID: "250733c1-8f74-4b98-a527-1111914407e6"). InnerVolumeSpecName "kube-api-access-5wvk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.313188 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rdwzt"] Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.313459 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rdwzt" podUID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" containerName="registry-server" containerID="cri-o://190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5" gracePeriod=2 Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.369215 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/250733c1-8f74-4b98-a527-1111914407e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "250733c1-8f74-4b98-a527-1111914407e6" (UID: "250733c1-8f74-4b98-a527-1111914407e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.407359 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250733c1-8f74-4b98-a527-1111914407e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.407390 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250733c1-8f74-4b98-a527-1111914407e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.407400 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wvk4\" (UniqueName: \"kubernetes.io/projected/250733c1-8f74-4b98-a527-1111914407e6-kube-api-access-5wvk4\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.673654 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rdwzt" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.757255 4895 generic.go:334] "Generic (PLEG): container finished" podID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" containerID="190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5" exitCode=0 Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.757346 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rdwzt" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.757335 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdwzt" event={"ID":"9bc9e239-b65e-4860-9b32-8c2827e9c12a","Type":"ContainerDied","Data":"190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5"} Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.757476 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rdwzt" event={"ID":"9bc9e239-b65e-4860-9b32-8c2827e9c12a","Type":"ContainerDied","Data":"adc50ba0acc5eaeb5fa279883322284091246cd7df4e3cb93f296c2d8fc3d20f"} Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.757495 4895 scope.go:117] "RemoveContainer" containerID="190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.762715 4895 generic.go:334] "Generic (PLEG): container finished" podID="250733c1-8f74-4b98-a527-1111914407e6" containerID="277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904" exitCode=0 Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.762770 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l8nck" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.762774 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8nck" event={"ID":"250733c1-8f74-4b98-a527-1111914407e6","Type":"ContainerDied","Data":"277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904"} Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.762830 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8nck" event={"ID":"250733c1-8f74-4b98-a527-1111914407e6","Type":"ContainerDied","Data":"99d727662c7163e8984dda73ac1ac842a092011dffe3f1685037382bc6bdbb4d"} Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.781806 4895 scope.go:117] "RemoveContainer" containerID="6f831f2df9a83dfe64f61440962a678369d8b0502924076c5748bc719fcc2d48" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.801570 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l8nck"] Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.810736 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l8nck"] Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.814501 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9e239-b65e-4860-9b32-8c2827e9c12a-catalog-content\") pod \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\" (UID: \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\") " Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.814653 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9e239-b65e-4860-9b32-8c2827e9c12a-utilities\") pod \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\" (UID: \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\") " Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 
17:10:29.814794 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl89q\" (UniqueName: \"kubernetes.io/projected/9bc9e239-b65e-4860-9b32-8c2827e9c12a-kube-api-access-gl89q\") pod \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\" (UID: \"9bc9e239-b65e-4860-9b32-8c2827e9c12a\") " Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.816562 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bc9e239-b65e-4860-9b32-8c2827e9c12a-utilities" (OuterVolumeSpecName: "utilities") pod "9bc9e239-b65e-4860-9b32-8c2827e9c12a" (UID: "9bc9e239-b65e-4860-9b32-8c2827e9c12a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.820339 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc9e239-b65e-4860-9b32-8c2827e9c12a-kube-api-access-gl89q" (OuterVolumeSpecName: "kube-api-access-gl89q") pod "9bc9e239-b65e-4860-9b32-8c2827e9c12a" (UID: "9bc9e239-b65e-4860-9b32-8c2827e9c12a"). InnerVolumeSpecName "kube-api-access-gl89q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.823346 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.829776 4895 scope.go:117] "RemoveContainer" containerID="8f0e73d878dc72fe254efc1238170b4c58cc7532655cfb88c55961a04878fac9" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.852700 4895 scope.go:117] "RemoveContainer" containerID="190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5" Jan 29 17:10:29 crc kubenswrapper[4895]: E0129 17:10:29.853182 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5\": container with ID starting with 190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5 not found: ID does not exist" containerID="190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.853218 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5"} err="failed to get container status \"190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5\": rpc error: code = NotFound desc = could not find container \"190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5\": container with ID starting with 190c2c9b5ea135146930ca18bf28e20f142ee4f790de9d779941d71fd1481fe5 not found: ID does not exist" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.853239 4895 scope.go:117] "RemoveContainer" containerID="6f831f2df9a83dfe64f61440962a678369d8b0502924076c5748bc719fcc2d48" Jan 29 17:10:29 crc kubenswrapper[4895]: E0129 17:10:29.853544 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"6f831f2df9a83dfe64f61440962a678369d8b0502924076c5748bc719fcc2d48\": container with ID starting with 6f831f2df9a83dfe64f61440962a678369d8b0502924076c5748bc719fcc2d48 not found: ID does not exist" containerID="6f831f2df9a83dfe64f61440962a678369d8b0502924076c5748bc719fcc2d48" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.853597 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f831f2df9a83dfe64f61440962a678369d8b0502924076c5748bc719fcc2d48"} err="failed to get container status \"6f831f2df9a83dfe64f61440962a678369d8b0502924076c5748bc719fcc2d48\": rpc error: code = NotFound desc = could not find container \"6f831f2df9a83dfe64f61440962a678369d8b0502924076c5748bc719fcc2d48\": container with ID starting with 6f831f2df9a83dfe64f61440962a678369d8b0502924076c5748bc719fcc2d48 not found: ID does not exist" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.853623 4895 scope.go:117] "RemoveContainer" containerID="8f0e73d878dc72fe254efc1238170b4c58cc7532655cfb88c55961a04878fac9" Jan 29 17:10:29 crc kubenswrapper[4895]: E0129 17:10:29.853960 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f0e73d878dc72fe254efc1238170b4c58cc7532655cfb88c55961a04878fac9\": container with ID starting with 8f0e73d878dc72fe254efc1238170b4c58cc7532655cfb88c55961a04878fac9 not found: ID does not exist" containerID="8f0e73d878dc72fe254efc1238170b4c58cc7532655cfb88c55961a04878fac9" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.853984 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f0e73d878dc72fe254efc1238170b4c58cc7532655cfb88c55961a04878fac9"} err="failed to get container status \"8f0e73d878dc72fe254efc1238170b4c58cc7532655cfb88c55961a04878fac9\": rpc error: code = NotFound desc = could not find container 
\"8f0e73d878dc72fe254efc1238170b4c58cc7532655cfb88c55961a04878fac9\": container with ID starting with 8f0e73d878dc72fe254efc1238170b4c58cc7532655cfb88c55961a04878fac9 not found: ID does not exist" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.853998 4895 scope.go:117] "RemoveContainer" containerID="277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.917834 4895 scope.go:117] "RemoveContainer" containerID="57e732ad7375c611d17766dbac897706c36d4680717b124f46e6df167620b916" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.918837 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9e239-b65e-4860-9b32-8c2827e9c12a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.918889 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl89q\" (UniqueName: \"kubernetes.io/projected/9bc9e239-b65e-4860-9b32-8c2827e9c12a-kube-api-access-gl89q\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.948980 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bc9e239-b65e-4860-9b32-8c2827e9c12a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bc9e239-b65e-4860-9b32-8c2827e9c12a" (UID: "9bc9e239-b65e-4860-9b32-8c2827e9c12a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.952227 4895 scope.go:117] "RemoveContainer" containerID="a911fe10545555e93dd5494f3a1cb5301e37e2eb72364d6e8a9c4d660ff4999d" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.994377 4895 scope.go:117] "RemoveContainer" containerID="277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904" Jan 29 17:10:29 crc kubenswrapper[4895]: E0129 17:10:29.994982 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904\": container with ID starting with 277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904 not found: ID does not exist" containerID="277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.995030 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904"} err="failed to get container status \"277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904\": rpc error: code = NotFound desc = could not find container \"277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904\": container with ID starting with 277f6cb00a9cfb698c5525ada987240703a03b4e1f55f467ea05b722892c3904 not found: ID does not exist" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.995052 4895 scope.go:117] "RemoveContainer" containerID="57e732ad7375c611d17766dbac897706c36d4680717b124f46e6df167620b916" Jan 29 17:10:29 crc kubenswrapper[4895]: E0129 17:10:29.995456 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57e732ad7375c611d17766dbac897706c36d4680717b124f46e6df167620b916\": container with ID starting with 
57e732ad7375c611d17766dbac897706c36d4680717b124f46e6df167620b916 not found: ID does not exist" containerID="57e732ad7375c611d17766dbac897706c36d4680717b124f46e6df167620b916" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.995495 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e732ad7375c611d17766dbac897706c36d4680717b124f46e6df167620b916"} err="failed to get container status \"57e732ad7375c611d17766dbac897706c36d4680717b124f46e6df167620b916\": rpc error: code = NotFound desc = could not find container \"57e732ad7375c611d17766dbac897706c36d4680717b124f46e6df167620b916\": container with ID starting with 57e732ad7375c611d17766dbac897706c36d4680717b124f46e6df167620b916 not found: ID does not exist" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.995532 4895 scope.go:117] "RemoveContainer" containerID="a911fe10545555e93dd5494f3a1cb5301e37e2eb72364d6e8a9c4d660ff4999d" Jan 29 17:10:29 crc kubenswrapper[4895]: E0129 17:10:29.996149 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a911fe10545555e93dd5494f3a1cb5301e37e2eb72364d6e8a9c4d660ff4999d\": container with ID starting with a911fe10545555e93dd5494f3a1cb5301e37e2eb72364d6e8a9c4d660ff4999d not found: ID does not exist" containerID="a911fe10545555e93dd5494f3a1cb5301e37e2eb72364d6e8a9c4d660ff4999d" Jan 29 17:10:29 crc kubenswrapper[4895]: I0129 17:10:29.996177 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a911fe10545555e93dd5494f3a1cb5301e37e2eb72364d6e8a9c4d660ff4999d"} err="failed to get container status \"a911fe10545555e93dd5494f3a1cb5301e37e2eb72364d6e8a9c4d660ff4999d\": rpc error: code = NotFound desc = could not find container \"a911fe10545555e93dd5494f3a1cb5301e37e2eb72364d6e8a9c4d660ff4999d\": container with ID starting with a911fe10545555e93dd5494f3a1cb5301e37e2eb72364d6e8a9c4d660ff4999d not found: ID does not 
exist" Jan 29 17:10:30 crc kubenswrapper[4895]: I0129 17:10:30.020447 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9e239-b65e-4860-9b32-8c2827e9c12a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:30 crc kubenswrapper[4895]: I0129 17:10:30.097808 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rdwzt"] Jan 29 17:10:30 crc kubenswrapper[4895]: I0129 17:10:30.106805 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rdwzt"] Jan 29 17:10:31 crc kubenswrapper[4895]: I0129 17:10:31.049988 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250733c1-8f74-4b98-a527-1111914407e6" path="/var/lib/kubelet/pods/250733c1-8f74-4b98-a527-1111914407e6/volumes" Jan 29 17:10:31 crc kubenswrapper[4895]: I0129 17:10:31.050988 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" path="/var/lib/kubelet/pods/9bc9e239-b65e-4860-9b32-8c2827e9c12a/volumes" Jan 29 17:10:31 crc kubenswrapper[4895]: I0129 17:10:31.717185 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kfs2b"] Jan 29 17:10:32 crc kubenswrapper[4895]: I0129 17:10:32.795636 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kfs2b" podUID="e1a4c5f3-02fd-46ea-b907-2b6635b746fb" containerName="registry-server" containerID="cri-o://998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be" gracePeriod=2 Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.272305 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.398016 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-utilities\") pod \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\" (UID: \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\") " Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.398151 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jgpl\" (UniqueName: \"kubernetes.io/projected/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-kube-api-access-2jgpl\") pod \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\" (UID: \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\") " Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.398186 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-catalog-content\") pod \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\" (UID: \"e1a4c5f3-02fd-46ea-b907-2b6635b746fb\") " Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.398801 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-utilities" (OuterVolumeSpecName: "utilities") pod "e1a4c5f3-02fd-46ea-b907-2b6635b746fb" (UID: "e1a4c5f3-02fd-46ea-b907-2b6635b746fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.423364 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-kube-api-access-2jgpl" (OuterVolumeSpecName: "kube-api-access-2jgpl") pod "e1a4c5f3-02fd-46ea-b907-2b6635b746fb" (UID: "e1a4c5f3-02fd-46ea-b907-2b6635b746fb"). InnerVolumeSpecName "kube-api-access-2jgpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.450648 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1a4c5f3-02fd-46ea-b907-2b6635b746fb" (UID: "e1a4c5f3-02fd-46ea-b907-2b6635b746fb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.500313 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jgpl\" (UniqueName: \"kubernetes.io/projected/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-kube-api-access-2jgpl\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.500360 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.500372 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1a4c5f3-02fd-46ea-b907-2b6635b746fb-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.809712 4895 generic.go:334] "Generic (PLEG): container finished" podID="e1a4c5f3-02fd-46ea-b907-2b6635b746fb" containerID="998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be" exitCode=0 Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.809758 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kfs2b" event={"ID":"e1a4c5f3-02fd-46ea-b907-2b6635b746fb","Type":"ContainerDied","Data":"998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be"} Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.809787 4895 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-kfs2b" event={"ID":"e1a4c5f3-02fd-46ea-b907-2b6635b746fb","Type":"ContainerDied","Data":"1ad52bccc4152786994e421e59418a8c16100399f714c70bd732b1bcf4d1705b"} Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.809805 4895 scope.go:117] "RemoveContainer" containerID="998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.809820 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kfs2b" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.844623 4895 scope.go:117] "RemoveContainer" containerID="945d70e0cfb30d1aa82c1c14654fc90a61c8fd3f04361ff3447f36db934ce501" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.860742 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kfs2b"] Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.881562 4895 scope.go:117] "RemoveContainer" containerID="b26c4e79a52fcc7a6f1153724c7edcd40ba59fad4556c26bb58b1edf32b7d103" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.886809 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kfs2b"] Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.921075 4895 scope.go:117] "RemoveContainer" containerID="998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be" Jan 29 17:10:33 crc kubenswrapper[4895]: E0129 17:10:33.921580 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be\": container with ID starting with 998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be not found: ID does not exist" containerID="998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 
17:10:33.921631 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be"} err="failed to get container status \"998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be\": rpc error: code = NotFound desc = could not find container \"998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be\": container with ID starting with 998c8186ab21d9bf867abcebbe03fe48cd2686c442b2f749d70f55a54950d6be not found: ID does not exist" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.921660 4895 scope.go:117] "RemoveContainer" containerID="945d70e0cfb30d1aa82c1c14654fc90a61c8fd3f04361ff3447f36db934ce501" Jan 29 17:10:33 crc kubenswrapper[4895]: E0129 17:10:33.922208 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"945d70e0cfb30d1aa82c1c14654fc90a61c8fd3f04361ff3447f36db934ce501\": container with ID starting with 945d70e0cfb30d1aa82c1c14654fc90a61c8fd3f04361ff3447f36db934ce501 not found: ID does not exist" containerID="945d70e0cfb30d1aa82c1c14654fc90a61c8fd3f04361ff3447f36db934ce501" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.922245 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"945d70e0cfb30d1aa82c1c14654fc90a61c8fd3f04361ff3447f36db934ce501"} err="failed to get container status \"945d70e0cfb30d1aa82c1c14654fc90a61c8fd3f04361ff3447f36db934ce501\": rpc error: code = NotFound desc = could not find container \"945d70e0cfb30d1aa82c1c14654fc90a61c8fd3f04361ff3447f36db934ce501\": container with ID starting with 945d70e0cfb30d1aa82c1c14654fc90a61c8fd3f04361ff3447f36db934ce501 not found: ID does not exist" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.922280 4895 scope.go:117] "RemoveContainer" containerID="b26c4e79a52fcc7a6f1153724c7edcd40ba59fad4556c26bb58b1edf32b7d103" Jan 29 17:10:33 crc 
kubenswrapper[4895]: E0129 17:10:33.922579 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b26c4e79a52fcc7a6f1153724c7edcd40ba59fad4556c26bb58b1edf32b7d103\": container with ID starting with b26c4e79a52fcc7a6f1153724c7edcd40ba59fad4556c26bb58b1edf32b7d103 not found: ID does not exist" containerID="b26c4e79a52fcc7a6f1153724c7edcd40ba59fad4556c26bb58b1edf32b7d103" Jan 29 17:10:33 crc kubenswrapper[4895]: I0129 17:10:33.922600 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b26c4e79a52fcc7a6f1153724c7edcd40ba59fad4556c26bb58b1edf32b7d103"} err="failed to get container status \"b26c4e79a52fcc7a6f1153724c7edcd40ba59fad4556c26bb58b1edf32b7d103\": rpc error: code = NotFound desc = could not find container \"b26c4e79a52fcc7a6f1153724c7edcd40ba59fad4556c26bb58b1edf32b7d103\": container with ID starting with b26c4e79a52fcc7a6f1153724c7edcd40ba59fad4556c26bb58b1edf32b7d103 not found: ID does not exist" Jan 29 17:10:35 crc kubenswrapper[4895]: I0129 17:10:35.049591 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a4c5f3-02fd-46ea-b907-2b6635b746fb" path="/var/lib/kubelet/pods/e1a4c5f3-02fd-46ea-b907-2b6635b746fb/volumes" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.544438 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 17:10:46 crc kubenswrapper[4895]: E0129 17:10:46.545502 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" containerName="extract-content" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.545520 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" containerName="extract-content" Jan 29 17:10:46 crc kubenswrapper[4895]: E0129 17:10:46.545537 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="250733c1-8f74-4b98-a527-1111914407e6" containerName="extract-content" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.545547 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="250733c1-8f74-4b98-a527-1111914407e6" containerName="extract-content" Jan 29 17:10:46 crc kubenswrapper[4895]: E0129 17:10:46.545567 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250733c1-8f74-4b98-a527-1111914407e6" containerName="extract-utilities" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.545575 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="250733c1-8f74-4b98-a527-1111914407e6" containerName="extract-utilities" Jan 29 17:10:46 crc kubenswrapper[4895]: E0129 17:10:46.545598 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" containerName="registry-server" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.545605 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" containerName="registry-server" Jan 29 17:10:46 crc kubenswrapper[4895]: E0129 17:10:46.545614 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" containerName="extract-utilities" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.545621 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" containerName="extract-utilities" Jan 29 17:10:46 crc kubenswrapper[4895]: E0129 17:10:46.545639 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a4c5f3-02fd-46ea-b907-2b6635b746fb" containerName="registry-server" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.545646 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a4c5f3-02fd-46ea-b907-2b6635b746fb" containerName="registry-server" Jan 29 17:10:46 crc kubenswrapper[4895]: E0129 17:10:46.545657 4895 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="250733c1-8f74-4b98-a527-1111914407e6" containerName="registry-server" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.545664 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="250733c1-8f74-4b98-a527-1111914407e6" containerName="registry-server" Jan 29 17:10:46 crc kubenswrapper[4895]: E0129 17:10:46.545679 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a4c5f3-02fd-46ea-b907-2b6635b746fb" containerName="extract-content" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.545687 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a4c5f3-02fd-46ea-b907-2b6635b746fb" containerName="extract-content" Jan 29 17:10:46 crc kubenswrapper[4895]: E0129 17:10:46.545711 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a4c5f3-02fd-46ea-b907-2b6635b746fb" containerName="extract-utilities" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.545718 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a4c5f3-02fd-46ea-b907-2b6635b746fb" containerName="extract-utilities" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.545935 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a4c5f3-02fd-46ea-b907-2b6635b746fb" containerName="registry-server" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.545956 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="250733c1-8f74-4b98-a527-1111914407e6" containerName="registry-server" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.545967 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc9e239-b65e-4860-9b32-8c2827e9c12a" containerName="registry-server" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.546622 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.549245 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.549298 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.549471 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9tlm5" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.556984 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.558256 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.669055 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa7221ee-55be-4a14-8149-7299f46d1f0d-config-data\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.669384 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.669406 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/fa7221ee-55be-4a14-8149-7299f46d1f0d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.669450 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.669475 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa7221ee-55be-4a14-8149-7299f46d1f0d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.669633 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84m6t\" (UniqueName: \"kubernetes.io/projected/fa7221ee-55be-4a14-8149-7299f46d1f0d-kube-api-access-84m6t\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.669678 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa7221ee-55be-4a14-8149-7299f46d1f0d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.669783 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.670049 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.771353 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.771417 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa7221ee-55be-4a14-8149-7299f46d1f0d-config-data\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.772127 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.772182 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa7221ee-55be-4a14-8149-7299f46d1f0d-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.772318 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.772409 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa7221ee-55be-4a14-8149-7299f46d1f0d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.772673 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa7221ee-55be-4a14-8149-7299f46d1f0d-config-data\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.772679 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84m6t\" (UniqueName: \"kubernetes.io/projected/fa7221ee-55be-4a14-8149-7299f46d1f0d-kube-api-access-84m6t\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.772746 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa7221ee-55be-4a14-8149-7299f46d1f0d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " 
pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.772797 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.773525 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa7221ee-55be-4a14-8149-7299f46d1f0d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.771905 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.774052 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa7221ee-55be-4a14-8149-7299f46d1f0d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.774093 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa7221ee-55be-4a14-8149-7299f46d1f0d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc 
kubenswrapper[4895]: I0129 17:10:46.779397 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.779592 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.780432 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.791074 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84m6t\" (UniqueName: \"kubernetes.io/projected/fa7221ee-55be-4a14-8149-7299f46d1f0d-kube-api-access-84m6t\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.799334 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " pod="openstack/tempest-tests-tempest" Jan 29 17:10:46 crc kubenswrapper[4895]: I0129 17:10:46.869095 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 17:10:47 crc kubenswrapper[4895]: I0129 17:10:47.324786 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 17:10:47 crc kubenswrapper[4895]: I0129 17:10:47.938855 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fa7221ee-55be-4a14-8149-7299f46d1f0d","Type":"ContainerStarted","Data":"4be67b90928c44a63c52a3335a770229cd4ff7533b0c39aa085725582271fc2f"} Jan 29 17:11:19 crc kubenswrapper[4895]: E0129 17:11:19.557009 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 29 17:11:19 crc kubenswrapper[4895]: E0129 17:11:19.557825 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84m6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(fa7221ee-55be-4a14-8149-7299f46d1f0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 17:11:19 crc kubenswrapper[4895]: E0129 17:11:19.558969 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="fa7221ee-55be-4a14-8149-7299f46d1f0d" Jan 29 17:11:20 crc kubenswrapper[4895]: E0129 17:11:20.233349 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="fa7221ee-55be-4a14-8149-7299f46d1f0d" Jan 29 17:11:27 crc kubenswrapper[4895]: I0129 17:11:27.823413 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:11:27 crc kubenswrapper[4895]: I0129 17:11:27.824113 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:11:35 crc kubenswrapper[4895]: I0129 17:11:35.994022 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 29 17:11:37 crc kubenswrapper[4895]: I0129 17:11:37.394567 4895 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fa7221ee-55be-4a14-8149-7299f46d1f0d","Type":"ContainerStarted","Data":"6e0dee3c09d878b5f048563b6b9dd16c332cacadb1ab473c8c85e594743043a6"} Jan 29 17:11:37 crc kubenswrapper[4895]: I0129 17:11:37.419734 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.765597901 podStartE2EDuration="52.419715941s" podCreationTimestamp="2026-01-29 17:10:45 +0000 UTC" firstStartedPulling="2026-01-29 17:10:47.336140884 +0000 UTC m=+3531.139118158" lastFinishedPulling="2026-01-29 17:11:35.990258924 +0000 UTC m=+3579.793236198" observedRunningTime="2026-01-29 17:11:37.417317936 +0000 UTC m=+3581.220295200" watchObservedRunningTime="2026-01-29 17:11:37.419715941 +0000 UTC m=+3581.222693205" Jan 29 17:11:57 crc kubenswrapper[4895]: I0129 17:11:57.823517 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:11:57 crc kubenswrapper[4895]: I0129 17:11:57.824214 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:12:27 crc kubenswrapper[4895]: I0129 17:12:27.823776 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:12:27 crc kubenswrapper[4895]: I0129 17:12:27.825597 4895 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:12:27 crc kubenswrapper[4895]: I0129 17:12:27.825700 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 17:12:27 crc kubenswrapper[4895]: I0129 17:12:27.826815 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8521355b0f55a845d2778709168b873fa7370171e7b215ad9f2d5fc6646fbd29"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:12:27 crc kubenswrapper[4895]: I0129 17:12:27.826981 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://8521355b0f55a845d2778709168b873fa7370171e7b215ad9f2d5fc6646fbd29" gracePeriod=600 Jan 29 17:12:28 crc kubenswrapper[4895]: I0129 17:12:28.851876 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="8521355b0f55a845d2778709168b873fa7370171e7b215ad9f2d5fc6646fbd29" exitCode=0 Jan 29 17:12:28 crc kubenswrapper[4895]: I0129 17:12:28.852130 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"8521355b0f55a845d2778709168b873fa7370171e7b215ad9f2d5fc6646fbd29"} Jan 29 17:12:28 crc kubenswrapper[4895]: I0129 
17:12:28.852454 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6"} Jan 29 17:12:28 crc kubenswrapper[4895]: I0129 17:12:28.852478 4895 scope.go:117] "RemoveContainer" containerID="ea01226ad6eb43b9d85e33b162a796fbda469f974f0b4fa1545f4e735fa33dba" Jan 29 17:14:57 crc kubenswrapper[4895]: I0129 17:14:57.822809 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:14:57 crc kubenswrapper[4895]: I0129 17:14:57.823522 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.159928 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc"] Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.162082 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.164881 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.165190 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.176498 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc"] Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.272039 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a06d35c8-b993-4b39-b3cd-da277785beaa-config-volume\") pod \"collect-profiles-29495115-wtqjc\" (UID: \"a06d35c8-b993-4b39-b3cd-da277785beaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.272515 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fqwd\" (UniqueName: \"kubernetes.io/projected/a06d35c8-b993-4b39-b3cd-da277785beaa-kube-api-access-9fqwd\") pod \"collect-profiles-29495115-wtqjc\" (UID: \"a06d35c8-b993-4b39-b3cd-da277785beaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.272795 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a06d35c8-b993-4b39-b3cd-da277785beaa-secret-volume\") pod \"collect-profiles-29495115-wtqjc\" (UID: \"a06d35c8-b993-4b39-b3cd-da277785beaa\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.375480 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fqwd\" (UniqueName: \"kubernetes.io/projected/a06d35c8-b993-4b39-b3cd-da277785beaa-kube-api-access-9fqwd\") pod \"collect-profiles-29495115-wtqjc\" (UID: \"a06d35c8-b993-4b39-b3cd-da277785beaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.375592 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a06d35c8-b993-4b39-b3cd-da277785beaa-secret-volume\") pod \"collect-profiles-29495115-wtqjc\" (UID: \"a06d35c8-b993-4b39-b3cd-da277785beaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.375651 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a06d35c8-b993-4b39-b3cd-da277785beaa-config-volume\") pod \"collect-profiles-29495115-wtqjc\" (UID: \"a06d35c8-b993-4b39-b3cd-da277785beaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.376808 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a06d35c8-b993-4b39-b3cd-da277785beaa-config-volume\") pod \"collect-profiles-29495115-wtqjc\" (UID: \"a06d35c8-b993-4b39-b3cd-da277785beaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.384828 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a06d35c8-b993-4b39-b3cd-da277785beaa-secret-volume\") pod \"collect-profiles-29495115-wtqjc\" (UID: \"a06d35c8-b993-4b39-b3cd-da277785beaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.395334 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fqwd\" (UniqueName: \"kubernetes.io/projected/a06d35c8-b993-4b39-b3cd-da277785beaa-kube-api-access-9fqwd\") pod \"collect-profiles-29495115-wtqjc\" (UID: \"a06d35c8-b993-4b39-b3cd-da277785beaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:00 crc kubenswrapper[4895]: I0129 17:15:00.496804 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:01 crc kubenswrapper[4895]: I0129 17:15:01.073386 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc"] Jan 29 17:15:01 crc kubenswrapper[4895]: I0129 17:15:01.218770 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" event={"ID":"a06d35c8-b993-4b39-b3cd-da277785beaa","Type":"ContainerStarted","Data":"2d1fa9b5d916e4fcabb9aee61b778832e79064424488ae706dde866dac9c9c64"} Jan 29 17:15:02 crc kubenswrapper[4895]: I0129 17:15:02.232010 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" event={"ID":"a06d35c8-b993-4b39-b3cd-da277785beaa","Type":"ContainerStarted","Data":"ce4b7e653b07bd19f615926d8fe8af9b9091376d4b31b406c5edd03ce716739d"} Jan 29 17:15:02 crc kubenswrapper[4895]: I0129 17:15:02.260533 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" 
podStartSLOduration=2.260509723 podStartE2EDuration="2.260509723s" podCreationTimestamp="2026-01-29 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:15:02.249947676 +0000 UTC m=+3786.052924960" watchObservedRunningTime="2026-01-29 17:15:02.260509723 +0000 UTC m=+3786.063486997" Jan 29 17:15:03 crc kubenswrapper[4895]: I0129 17:15:03.242165 4895 generic.go:334] "Generic (PLEG): container finished" podID="a06d35c8-b993-4b39-b3cd-da277785beaa" containerID="ce4b7e653b07bd19f615926d8fe8af9b9091376d4b31b406c5edd03ce716739d" exitCode=0 Jan 29 17:15:03 crc kubenswrapper[4895]: I0129 17:15:03.242241 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" event={"ID":"a06d35c8-b993-4b39-b3cd-da277785beaa","Type":"ContainerDied","Data":"ce4b7e653b07bd19f615926d8fe8af9b9091376d4b31b406c5edd03ce716739d"} Jan 29 17:15:04 crc kubenswrapper[4895]: I0129 17:15:04.896772 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:04 crc kubenswrapper[4895]: I0129 17:15:04.972495 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fqwd\" (UniqueName: \"kubernetes.io/projected/a06d35c8-b993-4b39-b3cd-da277785beaa-kube-api-access-9fqwd\") pod \"a06d35c8-b993-4b39-b3cd-da277785beaa\" (UID: \"a06d35c8-b993-4b39-b3cd-da277785beaa\") " Jan 29 17:15:04 crc kubenswrapper[4895]: I0129 17:15:04.972588 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a06d35c8-b993-4b39-b3cd-da277785beaa-secret-volume\") pod \"a06d35c8-b993-4b39-b3cd-da277785beaa\" (UID: \"a06d35c8-b993-4b39-b3cd-da277785beaa\") " Jan 29 17:15:04 crc kubenswrapper[4895]: I0129 17:15:04.972828 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a06d35c8-b993-4b39-b3cd-da277785beaa-config-volume\") pod \"a06d35c8-b993-4b39-b3cd-da277785beaa\" (UID: \"a06d35c8-b993-4b39-b3cd-da277785beaa\") " Jan 29 17:15:04 crc kubenswrapper[4895]: I0129 17:15:04.973723 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a06d35c8-b993-4b39-b3cd-da277785beaa-config-volume" (OuterVolumeSpecName: "config-volume") pod "a06d35c8-b993-4b39-b3cd-da277785beaa" (UID: "a06d35c8-b993-4b39-b3cd-da277785beaa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:15:05 crc kubenswrapper[4895]: I0129 17:15:05.001982 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06d35c8-b993-4b39-b3cd-da277785beaa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a06d35c8-b993-4b39-b3cd-da277785beaa" (UID: "a06d35c8-b993-4b39-b3cd-da277785beaa"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:15:05 crc kubenswrapper[4895]: I0129 17:15:05.002308 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a06d35c8-b993-4b39-b3cd-da277785beaa-kube-api-access-9fqwd" (OuterVolumeSpecName: "kube-api-access-9fqwd") pod "a06d35c8-b993-4b39-b3cd-da277785beaa" (UID: "a06d35c8-b993-4b39-b3cd-da277785beaa"). InnerVolumeSpecName "kube-api-access-9fqwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:15:05 crc kubenswrapper[4895]: I0129 17:15:05.075739 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a06d35c8-b993-4b39-b3cd-da277785beaa-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:05 crc kubenswrapper[4895]: I0129 17:15:05.076144 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a06d35c8-b993-4b39-b3cd-da277785beaa-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:05 crc kubenswrapper[4895]: I0129 17:15:05.076158 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fqwd\" (UniqueName: \"kubernetes.io/projected/a06d35c8-b993-4b39-b3cd-da277785beaa-kube-api-access-9fqwd\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:05 crc kubenswrapper[4895]: I0129 17:15:05.260907 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" event={"ID":"a06d35c8-b993-4b39-b3cd-da277785beaa","Type":"ContainerDied","Data":"2d1fa9b5d916e4fcabb9aee61b778832e79064424488ae706dde866dac9c9c64"} Jan 29 17:15:05 crc kubenswrapper[4895]: I0129 17:15:05.260957 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d1fa9b5d916e4fcabb9aee61b778832e79064424488ae706dde866dac9c9c64" Jan 29 17:15:05 crc kubenswrapper[4895]: I0129 17:15:05.260936 4895 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-wtqjc" Jan 29 17:15:05 crc kubenswrapper[4895]: I0129 17:15:05.335459 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s"] Jan 29 17:15:05 crc kubenswrapper[4895]: I0129 17:15:05.344500 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-xsl9s"] Jan 29 17:15:07 crc kubenswrapper[4895]: I0129 17:15:07.053074 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe15df8-b7a5-40c0-b0d0-5cd2ec699991" path="/var/lib/kubelet/pods/afe15df8-b7a5-40c0-b0d0-5cd2ec699991/volumes" Jan 29 17:15:19 crc kubenswrapper[4895]: I0129 17:15:19.563189 4895 scope.go:117] "RemoveContainer" containerID="05bcc2984662fca6527da0038d4ae401048308d59901dbed2914cc292d6c31d4" Jan 29 17:15:27 crc kubenswrapper[4895]: I0129 17:15:27.823291 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:15:27 crc kubenswrapper[4895]: I0129 17:15:27.823974 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:15:57 crc kubenswrapper[4895]: I0129 17:15:57.823600 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 29 17:15:57 crc kubenswrapper[4895]: I0129 17:15:57.824182 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:15:57 crc kubenswrapper[4895]: I0129 17:15:57.824227 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 17:15:57 crc kubenswrapper[4895]: I0129 17:15:57.825143 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:15:57 crc kubenswrapper[4895]: I0129 17:15:57.825202 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" gracePeriod=600 Jan 29 17:15:57 crc kubenswrapper[4895]: E0129 17:15:57.976960 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:15:58 crc kubenswrapper[4895]: 
I0129 17:15:58.734751 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" exitCode=0 Jan 29 17:15:58 crc kubenswrapper[4895]: I0129 17:15:58.734972 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6"} Jan 29 17:15:58 crc kubenswrapper[4895]: I0129 17:15:58.735153 4895 scope.go:117] "RemoveContainer" containerID="8521355b0f55a845d2778709168b873fa7370171e7b215ad9f2d5fc6646fbd29" Jan 29 17:15:58 crc kubenswrapper[4895]: I0129 17:15:58.736409 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:15:58 crc kubenswrapper[4895]: E0129 17:15:58.736840 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:16:09 crc kubenswrapper[4895]: I0129 17:16:09.037591 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:16:09 crc kubenswrapper[4895]: E0129 17:16:09.039168 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:16:21 crc kubenswrapper[4895]: I0129 17:16:21.037979 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:16:21 crc kubenswrapper[4895]: E0129 17:16:21.039766 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:16:35 crc kubenswrapper[4895]: I0129 17:16:35.037611 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:16:35 crc kubenswrapper[4895]: E0129 17:16:35.038422 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:16:47 crc kubenswrapper[4895]: I0129 17:16:47.041938 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:16:47 crc kubenswrapper[4895]: E0129 17:16:47.042715 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:17:00 crc kubenswrapper[4895]: I0129 17:17:00.038048 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:17:00 crc kubenswrapper[4895]: E0129 17:17:00.039742 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:17:11 crc kubenswrapper[4895]: I0129 17:17:11.037137 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:17:11 crc kubenswrapper[4895]: E0129 17:17:11.038061 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:17:24 crc kubenswrapper[4895]: I0129 17:17:24.036346 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:17:24 crc kubenswrapper[4895]: E0129 17:17:24.037075 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:17:36 crc kubenswrapper[4895]: I0129 17:17:36.058217 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-77da-account-create-update-jmwks"] Jan 29 17:17:36 crc kubenswrapper[4895]: I0129 17:17:36.066018 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-77da-account-create-update-jmwks"] Jan 29 17:17:36 crc kubenswrapper[4895]: I0129 17:17:36.074999 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-7pb72"] Jan 29 17:17:36 crc kubenswrapper[4895]: I0129 17:17:36.083693 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-7pb72"] Jan 29 17:17:37 crc kubenswrapper[4895]: I0129 17:17:37.050464 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88eecffd-f694-4bce-b938-966e62335540" path="/var/lib/kubelet/pods/88eecffd-f694-4bce-b938-966e62335540/volumes" Jan 29 17:17:37 crc kubenswrapper[4895]: I0129 17:17:37.051608 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee8529db-c2f6-4976-a6ff-19c73dca11ab" path="/var/lib/kubelet/pods/ee8529db-c2f6-4976-a6ff-19c73dca11ab/volumes" Jan 29 17:17:39 crc kubenswrapper[4895]: I0129 17:17:39.037236 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:17:39 crc kubenswrapper[4895]: E0129 17:17:39.037985 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:17:54 crc kubenswrapper[4895]: I0129 17:17:54.036798 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:17:54 crc kubenswrapper[4895]: E0129 17:17:54.037753 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:18:05 crc kubenswrapper[4895]: I0129 17:18:05.037144 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:18:05 crc kubenswrapper[4895]: E0129 17:18:05.038058 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:18:18 crc kubenswrapper[4895]: I0129 17:18:18.036501 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:18:18 crc kubenswrapper[4895]: E0129 17:18:18.037411 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:18:19 crc kubenswrapper[4895]: I0129 17:18:19.669067 4895 scope.go:117] "RemoveContainer" containerID="89fcd8b854c27057381b74aa48e610291a1d6b638ea24b24b0b5eb9f2e397ffd" Jan 29 17:18:19 crc kubenswrapper[4895]: I0129 17:18:19.709177 4895 scope.go:117] "RemoveContainer" containerID="a23e5a5fa8b8e9072e075e8f25c7d5368f2abd57f83649443d23374eee2523bd" Jan 29 17:18:31 crc kubenswrapper[4895]: I0129 17:18:31.037605 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:18:31 crc kubenswrapper[4895]: E0129 17:18:31.038745 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:18:43 crc kubenswrapper[4895]: I0129 17:18:43.037609 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:18:43 crc kubenswrapper[4895]: E0129 17:18:43.038765 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:18:49 crc kubenswrapper[4895]: I0129 17:18:49.059446 4895 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-gmsgc"] Jan 29 17:18:49 crc kubenswrapper[4895]: I0129 17:18:49.067635 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-gmsgc"] Jan 29 17:18:51 crc kubenswrapper[4895]: I0129 17:18:51.063621 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae3f812e-d3cf-4cac-b58f-bb93fe0557bd" path="/var/lib/kubelet/pods/ae3f812e-d3cf-4cac-b58f-bb93fe0557bd/volumes" Jan 29 17:18:54 crc kubenswrapper[4895]: I0129 17:18:54.037527 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:18:54 crc kubenswrapper[4895]: E0129 17:18:54.038332 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:19:09 crc kubenswrapper[4895]: I0129 17:19:09.037466 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:19:09 crc kubenswrapper[4895]: E0129 17:19:09.038778 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:19:19 crc kubenswrapper[4895]: I0129 17:19:19.805181 4895 scope.go:117] "RemoveContainer" 
containerID="1bb39dc9487955ca75523c7b2297397729282287ce5337d94d812fd137077d8b" Jan 29 17:19:22 crc kubenswrapper[4895]: I0129 17:19:22.920537 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kzmnd"] Jan 29 17:19:22 crc kubenswrapper[4895]: E0129 17:19:22.921622 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06d35c8-b993-4b39-b3cd-da277785beaa" containerName="collect-profiles" Jan 29 17:19:22 crc kubenswrapper[4895]: I0129 17:19:22.921640 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06d35c8-b993-4b39-b3cd-da277785beaa" containerName="collect-profiles" Jan 29 17:19:22 crc kubenswrapper[4895]: I0129 17:19:22.921918 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a06d35c8-b993-4b39-b3cd-da277785beaa" containerName="collect-profiles" Jan 29 17:19:22 crc kubenswrapper[4895]: I0129 17:19:22.923665 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:22 crc kubenswrapper[4895]: I0129 17:19:22.937742 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzmnd"] Jan 29 17:19:23 crc kubenswrapper[4895]: I0129 17:19:23.001559 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e39a118-0362-4129-8837-3e9272e1f318-catalog-content\") pod \"redhat-marketplace-kzmnd\" (UID: \"7e39a118-0362-4129-8837-3e9272e1f318\") " pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:23 crc kubenswrapper[4895]: I0129 17:19:23.002102 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbkj\" (UniqueName: \"kubernetes.io/projected/7e39a118-0362-4129-8837-3e9272e1f318-kube-api-access-vfbkj\") pod \"redhat-marketplace-kzmnd\" (UID: \"7e39a118-0362-4129-8837-3e9272e1f318\") 
" pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:23 crc kubenswrapper[4895]: I0129 17:19:23.002186 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e39a118-0362-4129-8837-3e9272e1f318-utilities\") pod \"redhat-marketplace-kzmnd\" (UID: \"7e39a118-0362-4129-8837-3e9272e1f318\") " pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:23 crc kubenswrapper[4895]: I0129 17:19:23.037637 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:19:23 crc kubenswrapper[4895]: E0129 17:19:23.037972 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:19:23 crc kubenswrapper[4895]: I0129 17:19:23.104062 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfbkj\" (UniqueName: \"kubernetes.io/projected/7e39a118-0362-4129-8837-3e9272e1f318-kube-api-access-vfbkj\") pod \"redhat-marketplace-kzmnd\" (UID: \"7e39a118-0362-4129-8837-3e9272e1f318\") " pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:23 crc kubenswrapper[4895]: I0129 17:19:23.104160 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e39a118-0362-4129-8837-3e9272e1f318-utilities\") pod \"redhat-marketplace-kzmnd\" (UID: \"7e39a118-0362-4129-8837-3e9272e1f318\") " pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:23 crc kubenswrapper[4895]: I0129 17:19:23.104425 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e39a118-0362-4129-8837-3e9272e1f318-catalog-content\") pod \"redhat-marketplace-kzmnd\" (UID: \"7e39a118-0362-4129-8837-3e9272e1f318\") " pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:23 crc kubenswrapper[4895]: I0129 17:19:23.104662 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e39a118-0362-4129-8837-3e9272e1f318-utilities\") pod \"redhat-marketplace-kzmnd\" (UID: \"7e39a118-0362-4129-8837-3e9272e1f318\") " pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:23 crc kubenswrapper[4895]: I0129 17:19:23.105237 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e39a118-0362-4129-8837-3e9272e1f318-catalog-content\") pod \"redhat-marketplace-kzmnd\" (UID: \"7e39a118-0362-4129-8837-3e9272e1f318\") " pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:23 crc kubenswrapper[4895]: I0129 17:19:23.129076 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfbkj\" (UniqueName: \"kubernetes.io/projected/7e39a118-0362-4129-8837-3e9272e1f318-kube-api-access-vfbkj\") pod \"redhat-marketplace-kzmnd\" (UID: \"7e39a118-0362-4129-8837-3e9272e1f318\") " pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:23 crc kubenswrapper[4895]: I0129 17:19:23.249948 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:23 crc kubenswrapper[4895]: I0129 17:19:23.967775 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzmnd"] Jan 29 17:19:23 crc kubenswrapper[4895]: W0129 17:19:23.973284 4895 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e39a118_0362_4129_8837_3e9272e1f318.slice/crio-d4575152972dc38839506137105ee89d8d68df3d770f62abb9a38b35c543c263 WatchSource:0}: Error finding container d4575152972dc38839506137105ee89d8d68df3d770f62abb9a38b35c543c263: Status 404 returned error can't find the container with id d4575152972dc38839506137105ee89d8d68df3d770f62abb9a38b35c543c263 Jan 29 17:19:24 crc kubenswrapper[4895]: I0129 17:19:24.395114 4895 generic.go:334] "Generic (PLEG): container finished" podID="7e39a118-0362-4129-8837-3e9272e1f318" containerID="e85a892680cd54306bbbb2e921c6347c606ff395712a556888cc543ecfb9abbd" exitCode=0 Jan 29 17:19:24 crc kubenswrapper[4895]: I0129 17:19:24.395212 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzmnd" event={"ID":"7e39a118-0362-4129-8837-3e9272e1f318","Type":"ContainerDied","Data":"e85a892680cd54306bbbb2e921c6347c606ff395712a556888cc543ecfb9abbd"} Jan 29 17:19:24 crc kubenswrapper[4895]: I0129 17:19:24.395516 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzmnd" event={"ID":"7e39a118-0362-4129-8837-3e9272e1f318","Type":"ContainerStarted","Data":"d4575152972dc38839506137105ee89d8d68df3d770f62abb9a38b35c543c263"} Jan 29 17:19:24 crc kubenswrapper[4895]: I0129 17:19:24.396989 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:19:25 crc kubenswrapper[4895]: I0129 17:19:25.404382 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kzmnd" event={"ID":"7e39a118-0362-4129-8837-3e9272e1f318","Type":"ContainerStarted","Data":"da0bc7d4af46512484b747408b47be76a87f60e14dc0e84230fdaea56ae374ef"} Jan 29 17:19:26 crc kubenswrapper[4895]: I0129 17:19:26.414562 4895 generic.go:334] "Generic (PLEG): container finished" podID="7e39a118-0362-4129-8837-3e9272e1f318" containerID="da0bc7d4af46512484b747408b47be76a87f60e14dc0e84230fdaea56ae374ef" exitCode=0 Jan 29 17:19:26 crc kubenswrapper[4895]: I0129 17:19:26.414662 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzmnd" event={"ID":"7e39a118-0362-4129-8837-3e9272e1f318","Type":"ContainerDied","Data":"da0bc7d4af46512484b747408b47be76a87f60e14dc0e84230fdaea56ae374ef"} Jan 29 17:19:27 crc kubenswrapper[4895]: I0129 17:19:27.424744 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzmnd" event={"ID":"7e39a118-0362-4129-8837-3e9272e1f318","Type":"ContainerStarted","Data":"462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7"} Jan 29 17:19:27 crc kubenswrapper[4895]: I0129 17:19:27.454098 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kzmnd" podStartSLOduration=2.951775009 podStartE2EDuration="5.454073221s" podCreationTimestamp="2026-01-29 17:19:22 +0000 UTC" firstStartedPulling="2026-01-29 17:19:24.396680681 +0000 UTC m=+4048.199657945" lastFinishedPulling="2026-01-29 17:19:26.898978863 +0000 UTC m=+4050.701956157" observedRunningTime="2026-01-29 17:19:27.446440256 +0000 UTC m=+4051.249417530" watchObservedRunningTime="2026-01-29 17:19:27.454073221 +0000 UTC m=+4051.257050495" Jan 29 17:19:33 crc kubenswrapper[4895]: I0129 17:19:33.252799 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:33 crc kubenswrapper[4895]: I0129 17:19:33.253453 4895 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:33 crc kubenswrapper[4895]: I0129 17:19:33.358948 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:33 crc kubenswrapper[4895]: I0129 17:19:33.519262 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:33 crc kubenswrapper[4895]: I0129 17:19:33.594099 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzmnd"] Jan 29 17:19:35 crc kubenswrapper[4895]: I0129 17:19:35.492453 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kzmnd" podUID="7e39a118-0362-4129-8837-3e9272e1f318" containerName="registry-server" containerID="cri-o://462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7" gracePeriod=2 Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.037008 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:19:36 crc kubenswrapper[4895]: E0129 17:19:36.037898 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.065668 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.161260 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfbkj\" (UniqueName: \"kubernetes.io/projected/7e39a118-0362-4129-8837-3e9272e1f318-kube-api-access-vfbkj\") pod \"7e39a118-0362-4129-8837-3e9272e1f318\" (UID: \"7e39a118-0362-4129-8837-3e9272e1f318\") " Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.161517 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e39a118-0362-4129-8837-3e9272e1f318-utilities\") pod \"7e39a118-0362-4129-8837-3e9272e1f318\" (UID: \"7e39a118-0362-4129-8837-3e9272e1f318\") " Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.161580 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e39a118-0362-4129-8837-3e9272e1f318-catalog-content\") pod \"7e39a118-0362-4129-8837-3e9272e1f318\" (UID: \"7e39a118-0362-4129-8837-3e9272e1f318\") " Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.162362 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e39a118-0362-4129-8837-3e9272e1f318-utilities" (OuterVolumeSpecName: "utilities") pod "7e39a118-0362-4129-8837-3e9272e1f318" (UID: "7e39a118-0362-4129-8837-3e9272e1f318"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.162582 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e39a118-0362-4129-8837-3e9272e1f318-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.170746 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e39a118-0362-4129-8837-3e9272e1f318-kube-api-access-vfbkj" (OuterVolumeSpecName: "kube-api-access-vfbkj") pod "7e39a118-0362-4129-8837-3e9272e1f318" (UID: "7e39a118-0362-4129-8837-3e9272e1f318"). InnerVolumeSpecName "kube-api-access-vfbkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.180333 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e39a118-0362-4129-8837-3e9272e1f318-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e39a118-0362-4129-8837-3e9272e1f318" (UID: "7e39a118-0362-4129-8837-3e9272e1f318"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.264445 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e39a118-0362-4129-8837-3e9272e1f318-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.264494 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfbkj\" (UniqueName: \"kubernetes.io/projected/7e39a118-0362-4129-8837-3e9272e1f318-kube-api-access-vfbkj\") on node \"crc\" DevicePath \"\"" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.503254 4895 generic.go:334] "Generic (PLEG): container finished" podID="7e39a118-0362-4129-8837-3e9272e1f318" containerID="462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7" exitCode=0 Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.503333 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzmnd" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.503347 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzmnd" event={"ID":"7e39a118-0362-4129-8837-3e9272e1f318","Type":"ContainerDied","Data":"462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7"} Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.505069 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzmnd" event={"ID":"7e39a118-0362-4129-8837-3e9272e1f318","Type":"ContainerDied","Data":"d4575152972dc38839506137105ee89d8d68df3d770f62abb9a38b35c543c263"} Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.505096 4895 scope.go:117] "RemoveContainer" containerID="462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.525058 4895 scope.go:117] "RemoveContainer" 
containerID="da0bc7d4af46512484b747408b47be76a87f60e14dc0e84230fdaea56ae374ef" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.544922 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzmnd"] Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.577299 4895 scope.go:117] "RemoveContainer" containerID="e85a892680cd54306bbbb2e921c6347c606ff395712a556888cc543ecfb9abbd" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.580145 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzmnd"] Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.613084 4895 scope.go:117] "RemoveContainer" containerID="462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7" Jan 29 17:19:36 crc kubenswrapper[4895]: E0129 17:19:36.613768 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7\": container with ID starting with 462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7 not found: ID does not exist" containerID="462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.613805 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7"} err="failed to get container status \"462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7\": rpc error: code = NotFound desc = could not find container \"462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7\": container with ID starting with 462b9eed1e7d5b57c8041dd4aa17910375a642c6e219df2c6c0f0924047304b7 not found: ID does not exist" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.613827 4895 scope.go:117] "RemoveContainer" 
containerID="da0bc7d4af46512484b747408b47be76a87f60e14dc0e84230fdaea56ae374ef" Jan 29 17:19:36 crc kubenswrapper[4895]: E0129 17:19:36.614225 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da0bc7d4af46512484b747408b47be76a87f60e14dc0e84230fdaea56ae374ef\": container with ID starting with da0bc7d4af46512484b747408b47be76a87f60e14dc0e84230fdaea56ae374ef not found: ID does not exist" containerID="da0bc7d4af46512484b747408b47be76a87f60e14dc0e84230fdaea56ae374ef" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.614277 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da0bc7d4af46512484b747408b47be76a87f60e14dc0e84230fdaea56ae374ef"} err="failed to get container status \"da0bc7d4af46512484b747408b47be76a87f60e14dc0e84230fdaea56ae374ef\": rpc error: code = NotFound desc = could not find container \"da0bc7d4af46512484b747408b47be76a87f60e14dc0e84230fdaea56ae374ef\": container with ID starting with da0bc7d4af46512484b747408b47be76a87f60e14dc0e84230fdaea56ae374ef not found: ID does not exist" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.614332 4895 scope.go:117] "RemoveContainer" containerID="e85a892680cd54306bbbb2e921c6347c606ff395712a556888cc543ecfb9abbd" Jan 29 17:19:36 crc kubenswrapper[4895]: E0129 17:19:36.614628 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e85a892680cd54306bbbb2e921c6347c606ff395712a556888cc543ecfb9abbd\": container with ID starting with e85a892680cd54306bbbb2e921c6347c606ff395712a556888cc543ecfb9abbd not found: ID does not exist" containerID="e85a892680cd54306bbbb2e921c6347c606ff395712a556888cc543ecfb9abbd" Jan 29 17:19:36 crc kubenswrapper[4895]: I0129 17:19:36.614648 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e85a892680cd54306bbbb2e921c6347c606ff395712a556888cc543ecfb9abbd"} err="failed to get container status \"e85a892680cd54306bbbb2e921c6347c606ff395712a556888cc543ecfb9abbd\": rpc error: code = NotFound desc = could not find container \"e85a892680cd54306bbbb2e921c6347c606ff395712a556888cc543ecfb9abbd\": container with ID starting with e85a892680cd54306bbbb2e921c6347c606ff395712a556888cc543ecfb9abbd not found: ID does not exist" Jan 29 17:19:37 crc kubenswrapper[4895]: I0129 17:19:37.053112 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e39a118-0362-4129-8837-3e9272e1f318" path="/var/lib/kubelet/pods/7e39a118-0362-4129-8837-3e9272e1f318/volumes" Jan 29 17:19:51 crc kubenswrapper[4895]: I0129 17:19:51.037138 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:19:51 crc kubenswrapper[4895]: E0129 17:19:51.038604 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:20:03 crc kubenswrapper[4895]: I0129 17:20:03.037725 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:20:03 crc kubenswrapper[4895]: E0129 17:20:03.038690 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:20:18 crc kubenswrapper[4895]: I0129 17:20:18.036963 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:20:18 crc kubenswrapper[4895]: E0129 17:20:18.037833 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.760362 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7hns6"] Jan 29 17:20:22 crc kubenswrapper[4895]: E0129 17:20:22.761627 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e39a118-0362-4129-8837-3e9272e1f318" containerName="extract-utilities" Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.761645 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e39a118-0362-4129-8837-3e9272e1f318" containerName="extract-utilities" Jan 29 17:20:22 crc kubenswrapper[4895]: E0129 17:20:22.761665 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e39a118-0362-4129-8837-3e9272e1f318" containerName="registry-server" Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.761673 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e39a118-0362-4129-8837-3e9272e1f318" containerName="registry-server" Jan 29 17:20:22 crc kubenswrapper[4895]: E0129 17:20:22.761704 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e39a118-0362-4129-8837-3e9272e1f318" containerName="extract-content" Jan 29 17:20:22 crc kubenswrapper[4895]: 
I0129 17:20:22.761712 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e39a118-0362-4129-8837-3e9272e1f318" containerName="extract-content" Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.761962 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e39a118-0362-4129-8837-3e9272e1f318" containerName="registry-server" Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.763645 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.776590 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hns6"] Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.875689 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-catalog-content\") pod \"community-operators-7hns6\" (UID: \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\") " pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.875807 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzd6w\" (UniqueName: \"kubernetes.io/projected/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-kube-api-access-gzd6w\") pod \"community-operators-7hns6\" (UID: \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\") " pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.875902 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-utilities\") pod \"community-operators-7hns6\" (UID: \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\") " pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:22 crc 
kubenswrapper[4895]: I0129 17:20:22.977332 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-utilities\") pod \"community-operators-7hns6\" (UID: \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\") " pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.977474 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-catalog-content\") pod \"community-operators-7hns6\" (UID: \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\") " pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.977542 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzd6w\" (UniqueName: \"kubernetes.io/projected/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-kube-api-access-gzd6w\") pod \"community-operators-7hns6\" (UID: \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\") " pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.978171 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-utilities\") pod \"community-operators-7hns6\" (UID: \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\") " pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:22 crc kubenswrapper[4895]: I0129 17:20:22.978228 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-catalog-content\") pod \"community-operators-7hns6\" (UID: \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\") " pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:23 crc kubenswrapper[4895]: I0129 17:20:23.004696 
4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzd6w\" (UniqueName: \"kubernetes.io/projected/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-kube-api-access-gzd6w\") pod \"community-operators-7hns6\" (UID: \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\") " pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:23 crc kubenswrapper[4895]: I0129 17:20:23.090159 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:23 crc kubenswrapper[4895]: I0129 17:20:23.679756 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hns6"] Jan 29 17:20:23 crc kubenswrapper[4895]: I0129 17:20:23.993743 4895 generic.go:334] "Generic (PLEG): container finished" podID="7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" containerID="6b831df338b2692eb81b416526d9a259bc597279ec5671545e66f5ffabcaf5bb" exitCode=0 Jan 29 17:20:23 crc kubenswrapper[4895]: I0129 17:20:23.993787 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hns6" event={"ID":"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed","Type":"ContainerDied","Data":"6b831df338b2692eb81b416526d9a259bc597279ec5671545e66f5ffabcaf5bb"} Jan 29 17:20:23 crc kubenswrapper[4895]: I0129 17:20:23.994195 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hns6" event={"ID":"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed","Type":"ContainerStarted","Data":"45bdf97a448d3923769bb4a156428e7fc0f430c3a4cfeeb873241cbdd8a21e0a"} Jan 29 17:20:25 crc kubenswrapper[4895]: I0129 17:20:25.004204 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hns6" event={"ID":"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed","Type":"ContainerStarted","Data":"13d8c7b1f42a4ece926c14076d93abac3abbba34369dd4ac541241ac0efdd118"} Jan 29 17:20:26 crc kubenswrapper[4895]: I0129 17:20:26.027922 4895 
generic.go:334] "Generic (PLEG): container finished" podID="7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" containerID="13d8c7b1f42a4ece926c14076d93abac3abbba34369dd4ac541241ac0efdd118" exitCode=0 Jan 29 17:20:26 crc kubenswrapper[4895]: I0129 17:20:26.028007 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hns6" event={"ID":"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed","Type":"ContainerDied","Data":"13d8c7b1f42a4ece926c14076d93abac3abbba34369dd4ac541241ac0efdd118"} Jan 29 17:20:27 crc kubenswrapper[4895]: I0129 17:20:27.049019 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hns6" event={"ID":"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed","Type":"ContainerStarted","Data":"b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d"} Jan 29 17:20:27 crc kubenswrapper[4895]: I0129 17:20:27.067792 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7hns6" podStartSLOduration=2.536222019 podStartE2EDuration="5.067770851s" podCreationTimestamp="2026-01-29 17:20:22 +0000 UTC" firstStartedPulling="2026-01-29 17:20:23.997005429 +0000 UTC m=+4107.799982693" lastFinishedPulling="2026-01-29 17:20:26.528554261 +0000 UTC m=+4110.331531525" observedRunningTime="2026-01-29 17:20:27.061859431 +0000 UTC m=+4110.864836715" watchObservedRunningTime="2026-01-29 17:20:27.067770851 +0000 UTC m=+4110.870748115" Jan 29 17:20:31 crc kubenswrapper[4895]: I0129 17:20:31.037568 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:20:31 crc kubenswrapper[4895]: E0129 17:20:31.038478 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:20:33 crc kubenswrapper[4895]: I0129 17:20:33.090248 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:33 crc kubenswrapper[4895]: I0129 17:20:33.090662 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:33 crc kubenswrapper[4895]: I0129 17:20:33.134143 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:33 crc kubenswrapper[4895]: I0129 17:20:33.189235 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:33 crc kubenswrapper[4895]: I0129 17:20:33.368479 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hns6"] Jan 29 17:20:35 crc kubenswrapper[4895]: I0129 17:20:35.125955 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7hns6" podUID="7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" containerName="registry-server" containerID="cri-o://b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d" gracePeriod=2 Jan 29 17:20:35 crc kubenswrapper[4895]: I0129 17:20:35.691554 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:35 crc kubenswrapper[4895]: I0129 17:20:35.869434 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzd6w\" (UniqueName: \"kubernetes.io/projected/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-kube-api-access-gzd6w\") pod \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\" (UID: \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\") " Jan 29 17:20:35 crc kubenswrapper[4895]: I0129 17:20:35.869540 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-catalog-content\") pod \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\" (UID: \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\") " Jan 29 17:20:35 crc kubenswrapper[4895]: I0129 17:20:35.869602 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-utilities\") pod \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\" (UID: \"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed\") " Jan 29 17:20:35 crc kubenswrapper[4895]: I0129 17:20:35.870652 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-utilities" (OuterVolumeSpecName: "utilities") pod "7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" (UID: "7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:20:35 crc kubenswrapper[4895]: I0129 17:20:35.878170 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-kube-api-access-gzd6w" (OuterVolumeSpecName: "kube-api-access-gzd6w") pod "7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" (UID: "7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed"). InnerVolumeSpecName "kube-api-access-gzd6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:20:35 crc kubenswrapper[4895]: I0129 17:20:35.972634 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzd6w\" (UniqueName: \"kubernetes.io/projected/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-kube-api-access-gzd6w\") on node \"crc\" DevicePath \"\"" Jan 29 17:20:35 crc kubenswrapper[4895]: I0129 17:20:35.972703 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.074463 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" (UID: "7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.074899 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.136784 4895 generic.go:334] "Generic (PLEG): container finished" podID="7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" containerID="b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d" exitCode=0 Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.136852 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7hns6" Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.136904 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hns6" event={"ID":"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed","Type":"ContainerDied","Data":"b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d"} Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.140992 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hns6" event={"ID":"7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed","Type":"ContainerDied","Data":"45bdf97a448d3923769bb4a156428e7fc0f430c3a4cfeeb873241cbdd8a21e0a"} Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.141018 4895 scope.go:117] "RemoveContainer" containerID="b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d" Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.173859 4895 scope.go:117] "RemoveContainer" containerID="13d8c7b1f42a4ece926c14076d93abac3abbba34369dd4ac541241ac0efdd118" Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.180033 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hns6"] Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.188303 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7hns6"] Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.420521 4895 scope.go:117] "RemoveContainer" containerID="6b831df338b2692eb81b416526d9a259bc597279ec5671545e66f5ffabcaf5bb" Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.460111 4895 scope.go:117] "RemoveContainer" containerID="b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d" Jan 29 17:20:36 crc kubenswrapper[4895]: E0129 17:20:36.460539 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d\": container with ID starting with b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d not found: ID does not exist" containerID="b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d" Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.460572 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d"} err="failed to get container status \"b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d\": rpc error: code = NotFound desc = could not find container \"b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d\": container with ID starting with b9cb04cce7bc7418c2f263c91265726a6e66a06990260ad9c53eb9cd989da16d not found: ID does not exist" Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.460594 4895 scope.go:117] "RemoveContainer" containerID="13d8c7b1f42a4ece926c14076d93abac3abbba34369dd4ac541241ac0efdd118" Jan 29 17:20:36 crc kubenswrapper[4895]: E0129 17:20:36.460839 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13d8c7b1f42a4ece926c14076d93abac3abbba34369dd4ac541241ac0efdd118\": container with ID starting with 13d8c7b1f42a4ece926c14076d93abac3abbba34369dd4ac541241ac0efdd118 not found: ID does not exist" containerID="13d8c7b1f42a4ece926c14076d93abac3abbba34369dd4ac541241ac0efdd118" Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.460884 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13d8c7b1f42a4ece926c14076d93abac3abbba34369dd4ac541241ac0efdd118"} err="failed to get container status \"13d8c7b1f42a4ece926c14076d93abac3abbba34369dd4ac541241ac0efdd118\": rpc error: code = NotFound desc = could not find container \"13d8c7b1f42a4ece926c14076d93abac3abbba34369dd4ac541241ac0efdd118\": container with ID 
starting with 13d8c7b1f42a4ece926c14076d93abac3abbba34369dd4ac541241ac0efdd118 not found: ID does not exist" Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.460905 4895 scope.go:117] "RemoveContainer" containerID="6b831df338b2692eb81b416526d9a259bc597279ec5671545e66f5ffabcaf5bb" Jan 29 17:20:36 crc kubenswrapper[4895]: E0129 17:20:36.461200 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b831df338b2692eb81b416526d9a259bc597279ec5671545e66f5ffabcaf5bb\": container with ID starting with 6b831df338b2692eb81b416526d9a259bc597279ec5671545e66f5ffabcaf5bb not found: ID does not exist" containerID="6b831df338b2692eb81b416526d9a259bc597279ec5671545e66f5ffabcaf5bb" Jan 29 17:20:36 crc kubenswrapper[4895]: I0129 17:20:36.461241 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b831df338b2692eb81b416526d9a259bc597279ec5671545e66f5ffabcaf5bb"} err="failed to get container status \"6b831df338b2692eb81b416526d9a259bc597279ec5671545e66f5ffabcaf5bb\": rpc error: code = NotFound desc = could not find container \"6b831df338b2692eb81b416526d9a259bc597279ec5671545e66f5ffabcaf5bb\": container with ID starting with 6b831df338b2692eb81b416526d9a259bc597279ec5671545e66f5ffabcaf5bb not found: ID does not exist" Jan 29 17:20:37 crc kubenswrapper[4895]: I0129 17:20:37.046770 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" path="/var/lib/kubelet/pods/7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed/volumes" Jan 29 17:20:45 crc kubenswrapper[4895]: I0129 17:20:45.090663 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:20:45 crc kubenswrapper[4895]: E0129 17:20:45.091924 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:20:58 crc kubenswrapper[4895]: I0129 17:20:58.038222 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:20:59 crc kubenswrapper[4895]: I0129 17:20:59.276301 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"0d20ef6f3d76b820c58ac85becbcc108f9121804bc544948e34b90d436a00c9e"} Jan 29 17:23:27 crc kubenswrapper[4895]: I0129 17:23:27.823320 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:23:27 crc kubenswrapper[4895]: I0129 17:23:27.823894 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:23:57 crc kubenswrapper[4895]: I0129 17:23:57.823552 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:23:57 crc kubenswrapper[4895]: I0129 17:23:57.824584 4895 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.323566 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jntrl"] Jan 29 17:24:15 crc kubenswrapper[4895]: E0129 17:24:15.324792 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" containerName="registry-server" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.324805 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" containerName="registry-server" Jan 29 17:24:15 crc kubenswrapper[4895]: E0129 17:24:15.324821 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" containerName="extract-utilities" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.324827 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" containerName="extract-utilities" Jan 29 17:24:15 crc kubenswrapper[4895]: E0129 17:24:15.324837 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" containerName="extract-content" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.324842 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" containerName="extract-content" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.325047 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7c1c3f-a8ba-4a0c-9e1f-09e4ee9595ed" containerName="registry-server" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.326576 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.335077 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jntrl"] Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.391637 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e059f0b9-7460-41fa-a977-d48efc0118de-catalog-content\") pod \"certified-operators-jntrl\" (UID: \"e059f0b9-7460-41fa-a977-d48efc0118de\") " pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.391966 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e059f0b9-7460-41fa-a977-d48efc0118de-utilities\") pod \"certified-operators-jntrl\" (UID: \"e059f0b9-7460-41fa-a977-d48efc0118de\") " pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.392218 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bg66\" (UniqueName: \"kubernetes.io/projected/e059f0b9-7460-41fa-a977-d48efc0118de-kube-api-access-9bg66\") pod \"certified-operators-jntrl\" (UID: \"e059f0b9-7460-41fa-a977-d48efc0118de\") " pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.494345 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bg66\" (UniqueName: \"kubernetes.io/projected/e059f0b9-7460-41fa-a977-d48efc0118de-kube-api-access-9bg66\") pod \"certified-operators-jntrl\" (UID: \"e059f0b9-7460-41fa-a977-d48efc0118de\") " pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.494396 4895 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e059f0b9-7460-41fa-a977-d48efc0118de-catalog-content\") pod \"certified-operators-jntrl\" (UID: \"e059f0b9-7460-41fa-a977-d48efc0118de\") " pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.494439 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e059f0b9-7460-41fa-a977-d48efc0118de-utilities\") pod \"certified-operators-jntrl\" (UID: \"e059f0b9-7460-41fa-a977-d48efc0118de\") " pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.495051 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e059f0b9-7460-41fa-a977-d48efc0118de-utilities\") pod \"certified-operators-jntrl\" (UID: \"e059f0b9-7460-41fa-a977-d48efc0118de\") " pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.495294 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e059f0b9-7460-41fa-a977-d48efc0118de-catalog-content\") pod \"certified-operators-jntrl\" (UID: \"e059f0b9-7460-41fa-a977-d48efc0118de\") " pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.515346 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bg66\" (UniqueName: \"kubernetes.io/projected/e059f0b9-7460-41fa-a977-d48efc0118de-kube-api-access-9bg66\") pod \"certified-operators-jntrl\" (UID: \"e059f0b9-7460-41fa-a977-d48efc0118de\") " pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:15 crc kubenswrapper[4895]: I0129 17:24:15.648001 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.137626 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jntrl"] Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.720565 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d4npr"] Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.723606 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.729261 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4npr"] Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.826491 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgghf\" (UniqueName: \"kubernetes.io/projected/a3541d1b-c37d-4fe1-9156-d75510696b0c-kube-api-access-kgghf\") pod \"redhat-operators-d4npr\" (UID: \"a3541d1b-c37d-4fe1-9156-d75510696b0c\") " pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.826546 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3541d1b-c37d-4fe1-9156-d75510696b0c-utilities\") pod \"redhat-operators-d4npr\" (UID: \"a3541d1b-c37d-4fe1-9156-d75510696b0c\") " pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.826581 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3541d1b-c37d-4fe1-9156-d75510696b0c-catalog-content\") pod \"redhat-operators-d4npr\" (UID: \"a3541d1b-c37d-4fe1-9156-d75510696b0c\") " 
pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.928923 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgghf\" (UniqueName: \"kubernetes.io/projected/a3541d1b-c37d-4fe1-9156-d75510696b0c-kube-api-access-kgghf\") pod \"redhat-operators-d4npr\" (UID: \"a3541d1b-c37d-4fe1-9156-d75510696b0c\") " pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.928984 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3541d1b-c37d-4fe1-9156-d75510696b0c-utilities\") pod \"redhat-operators-d4npr\" (UID: \"a3541d1b-c37d-4fe1-9156-d75510696b0c\") " pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.929016 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3541d1b-c37d-4fe1-9156-d75510696b0c-catalog-content\") pod \"redhat-operators-d4npr\" (UID: \"a3541d1b-c37d-4fe1-9156-d75510696b0c\") " pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.929569 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3541d1b-c37d-4fe1-9156-d75510696b0c-utilities\") pod \"redhat-operators-d4npr\" (UID: \"a3541d1b-c37d-4fe1-9156-d75510696b0c\") " pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.929601 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3541d1b-c37d-4fe1-9156-d75510696b0c-catalog-content\") pod \"redhat-operators-d4npr\" (UID: \"a3541d1b-c37d-4fe1-9156-d75510696b0c\") " pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:24:16 crc 
kubenswrapper[4895]: I0129 17:24:16.941198 4895 generic.go:334] "Generic (PLEG): container finished" podID="e059f0b9-7460-41fa-a977-d48efc0118de" containerID="7345ec0ad3f668406f38ee506198fd9707610868c7e6aafa9c7b2510a97ce9b6" exitCode=0 Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.941248 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jntrl" event={"ID":"e059f0b9-7460-41fa-a977-d48efc0118de","Type":"ContainerDied","Data":"7345ec0ad3f668406f38ee506198fd9707610868c7e6aafa9c7b2510a97ce9b6"} Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.941275 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jntrl" event={"ID":"e059f0b9-7460-41fa-a977-d48efc0118de","Type":"ContainerStarted","Data":"31c742aab9733804fb3cfca1855bd1adaf4318c46039a0321f0be3b4c1fafb40"} Jan 29 17:24:16 crc kubenswrapper[4895]: I0129 17:24:16.955961 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgghf\" (UniqueName: \"kubernetes.io/projected/a3541d1b-c37d-4fe1-9156-d75510696b0c-kube-api-access-kgghf\") pod \"redhat-operators-d4npr\" (UID: \"a3541d1b-c37d-4fe1-9156-d75510696b0c\") " pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:24:17 crc kubenswrapper[4895]: I0129 17:24:17.054707 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:24:17 crc kubenswrapper[4895]: I0129 17:24:17.646892 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4npr"] Jan 29 17:24:17 crc kubenswrapper[4895]: I0129 17:24:17.951746 4895 generic.go:334] "Generic (PLEG): container finished" podID="a3541d1b-c37d-4fe1-9156-d75510696b0c" containerID="8525c5f32ae32f04c5d09cd5a8d2e73fdb0556c4c0b12d8066c84217e0db3e3e" exitCode=0 Jan 29 17:24:17 crc kubenswrapper[4895]: I0129 17:24:17.951909 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4npr" event={"ID":"a3541d1b-c37d-4fe1-9156-d75510696b0c","Type":"ContainerDied","Data":"8525c5f32ae32f04c5d09cd5a8d2e73fdb0556c4c0b12d8066c84217e0db3e3e"} Jan 29 17:24:17 crc kubenswrapper[4895]: I0129 17:24:17.952156 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4npr" event={"ID":"a3541d1b-c37d-4fe1-9156-d75510696b0c","Type":"ContainerStarted","Data":"229dd24f82159456d759a01f23160655e7ff9e956980bd24b884ad2bd82a10bd"} Jan 29 17:24:18 crc kubenswrapper[4895]: E0129 17:24:18.107511 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:24:18 crc kubenswrapper[4895]: E0129 17:24:18.107648 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgghf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-d4npr_openshift-marketplace(a3541d1b-c37d-4fe1-9156-d75510696b0c): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:24:18 crc kubenswrapper[4895]: E0129 17:24:18.109420 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-d4npr" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" Jan 29 17:24:18 crc kubenswrapper[4895]: I0129 17:24:18.971682 4895 generic.go:334] "Generic (PLEG): container finished" podID="e059f0b9-7460-41fa-a977-d48efc0118de" containerID="ea5c63f66abead65e398931df15c9dc77937a9f3e830a657274b5b972ca525be" exitCode=0 Jan 29 17:24:18 crc kubenswrapper[4895]: I0129 17:24:18.971751 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jntrl" event={"ID":"e059f0b9-7460-41fa-a977-d48efc0118de","Type":"ContainerDied","Data":"ea5c63f66abead65e398931df15c9dc77937a9f3e830a657274b5b972ca525be"} Jan 29 17:24:18 crc kubenswrapper[4895]: E0129 17:24:18.973756 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-d4npr" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" Jan 29 17:24:20 crc kubenswrapper[4895]: I0129 17:24:20.006231 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jntrl" event={"ID":"e059f0b9-7460-41fa-a977-d48efc0118de","Type":"ContainerStarted","Data":"831b5738cd4afe8ac5dfd6c129c80e880e1deefd0a6c905d5760960e24e884a2"} Jan 29 17:24:20 crc kubenswrapper[4895]: I0129 17:24:20.029884 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jntrl" podStartSLOduration=2.5570802500000003 podStartE2EDuration="5.029852214s" podCreationTimestamp="2026-01-29 17:24:15 +0000 UTC" firstStartedPulling="2026-01-29 17:24:16.94415672 +0000 UTC m=+4340.747133984" lastFinishedPulling="2026-01-29 17:24:19.416928684 +0000 UTC m=+4343.219905948" observedRunningTime="2026-01-29 17:24:20.025732453 +0000 UTC m=+4343.828709737" watchObservedRunningTime="2026-01-29 17:24:20.029852214 
+0000 UTC m=+4343.832829478" Jan 29 17:24:25 crc kubenswrapper[4895]: I0129 17:24:25.648599 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:25 crc kubenswrapper[4895]: I0129 17:24:25.649035 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:25 crc kubenswrapper[4895]: I0129 17:24:25.699852 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:26 crc kubenswrapper[4895]: I0129 17:24:26.105096 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:26 crc kubenswrapper[4895]: I0129 17:24:26.153571 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jntrl"] Jan 29 17:24:27 crc kubenswrapper[4895]: I0129 17:24:27.822807 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:24:27 crc kubenswrapper[4895]: I0129 17:24:27.823736 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:24:27 crc kubenswrapper[4895]: I0129 17:24:27.823820 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 17:24:27 crc kubenswrapper[4895]: I0129 17:24:27.825148 4895 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d20ef6f3d76b820c58ac85becbcc108f9121804bc544948e34b90d436a00c9e"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:24:27 crc kubenswrapper[4895]: I0129 17:24:27.825212 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://0d20ef6f3d76b820c58ac85becbcc108f9121804bc544948e34b90d436a00c9e" gracePeriod=600 Jan 29 17:24:28 crc kubenswrapper[4895]: I0129 17:24:28.080532 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="0d20ef6f3d76b820c58ac85becbcc108f9121804bc544948e34b90d436a00c9e" exitCode=0 Jan 29 17:24:28 crc kubenswrapper[4895]: I0129 17:24:28.080780 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"0d20ef6f3d76b820c58ac85becbcc108f9121804bc544948e34b90d436a00c9e"} Jan 29 17:24:28 crc kubenswrapper[4895]: I0129 17:24:28.081210 4895 scope.go:117] "RemoveContainer" containerID="11139b238c99eeaed98b2a6dd68181661604304e3ef0f38df00797eaa31861e6" Jan 29 17:24:28 crc kubenswrapper[4895]: I0129 17:24:28.081225 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jntrl" podUID="e059f0b9-7460-41fa-a977-d48efc0118de" containerName="registry-server" containerID="cri-o://831b5738cd4afe8ac5dfd6c129c80e880e1deefd0a6c905d5760960e24e884a2" gracePeriod=2 Jan 29 17:24:29 crc kubenswrapper[4895]: I0129 17:24:29.090716 4895 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3"} Jan 29 17:24:29 crc kubenswrapper[4895]: I0129 17:24:29.093832 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jntrl" event={"ID":"e059f0b9-7460-41fa-a977-d48efc0118de","Type":"ContainerDied","Data":"831b5738cd4afe8ac5dfd6c129c80e880e1deefd0a6c905d5760960e24e884a2"} Jan 29 17:24:29 crc kubenswrapper[4895]: I0129 17:24:29.093909 4895 generic.go:334] "Generic (PLEG): container finished" podID="e059f0b9-7460-41fa-a977-d48efc0118de" containerID="831b5738cd4afe8ac5dfd6c129c80e880e1deefd0a6c905d5760960e24e884a2" exitCode=0 Jan 29 17:24:29 crc kubenswrapper[4895]: I0129 17:24:29.218125 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:29 crc kubenswrapper[4895]: I0129 17:24:29.285213 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e059f0b9-7460-41fa-a977-d48efc0118de-utilities\") pod \"e059f0b9-7460-41fa-a977-d48efc0118de\" (UID: \"e059f0b9-7460-41fa-a977-d48efc0118de\") " Jan 29 17:24:29 crc kubenswrapper[4895]: I0129 17:24:29.285319 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bg66\" (UniqueName: \"kubernetes.io/projected/e059f0b9-7460-41fa-a977-d48efc0118de-kube-api-access-9bg66\") pod \"e059f0b9-7460-41fa-a977-d48efc0118de\" (UID: \"e059f0b9-7460-41fa-a977-d48efc0118de\") " Jan 29 17:24:29 crc kubenswrapper[4895]: I0129 17:24:29.285353 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e059f0b9-7460-41fa-a977-d48efc0118de-catalog-content\") pod 
\"e059f0b9-7460-41fa-a977-d48efc0118de\" (UID: \"e059f0b9-7460-41fa-a977-d48efc0118de\") " Jan 29 17:24:29 crc kubenswrapper[4895]: I0129 17:24:29.286376 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e059f0b9-7460-41fa-a977-d48efc0118de-utilities" (OuterVolumeSpecName: "utilities") pod "e059f0b9-7460-41fa-a977-d48efc0118de" (UID: "e059f0b9-7460-41fa-a977-d48efc0118de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:24:29 crc kubenswrapper[4895]: I0129 17:24:29.293168 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e059f0b9-7460-41fa-a977-d48efc0118de-kube-api-access-9bg66" (OuterVolumeSpecName: "kube-api-access-9bg66") pod "e059f0b9-7460-41fa-a977-d48efc0118de" (UID: "e059f0b9-7460-41fa-a977-d48efc0118de"). InnerVolumeSpecName "kube-api-access-9bg66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:24:29 crc kubenswrapper[4895]: I0129 17:24:29.388970 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e059f0b9-7460-41fa-a977-d48efc0118de-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:24:29 crc kubenswrapper[4895]: I0129 17:24:29.389016 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bg66\" (UniqueName: \"kubernetes.io/projected/e059f0b9-7460-41fa-a977-d48efc0118de-kube-api-access-9bg66\") on node \"crc\" DevicePath \"\"" Jan 29 17:24:30 crc kubenswrapper[4895]: I0129 17:24:30.107781 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jntrl" event={"ID":"e059f0b9-7460-41fa-a977-d48efc0118de","Type":"ContainerDied","Data":"31c742aab9733804fb3cfca1855bd1adaf4318c46039a0321f0be3b4c1fafb40"} Jan 29 17:24:30 crc kubenswrapper[4895]: I0129 17:24:30.107838 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jntrl" Jan 29 17:24:30 crc kubenswrapper[4895]: I0129 17:24:30.108248 4895 scope.go:117] "RemoveContainer" containerID="831b5738cd4afe8ac5dfd6c129c80e880e1deefd0a6c905d5760960e24e884a2" Jan 29 17:24:30 crc kubenswrapper[4895]: I0129 17:24:30.130229 4895 scope.go:117] "RemoveContainer" containerID="ea5c63f66abead65e398931df15c9dc77937a9f3e830a657274b5b972ca525be" Jan 29 17:24:30 crc kubenswrapper[4895]: I0129 17:24:30.176193 4895 scope.go:117] "RemoveContainer" containerID="7345ec0ad3f668406f38ee506198fd9707610868c7e6aafa9c7b2510a97ce9b6" Jan 29 17:24:30 crc kubenswrapper[4895]: I0129 17:24:30.190404 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e059f0b9-7460-41fa-a977-d48efc0118de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e059f0b9-7460-41fa-a977-d48efc0118de" (UID: "e059f0b9-7460-41fa-a977-d48efc0118de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:24:30 crc kubenswrapper[4895]: I0129 17:24:30.204485 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e059f0b9-7460-41fa-a977-d48efc0118de-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:24:30 crc kubenswrapper[4895]: I0129 17:24:30.447028 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jntrl"] Jan 29 17:24:30 crc kubenswrapper[4895]: I0129 17:24:30.460356 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jntrl"] Jan 29 17:24:31 crc kubenswrapper[4895]: I0129 17:24:31.048720 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e059f0b9-7460-41fa-a977-d48efc0118de" path="/var/lib/kubelet/pods/e059f0b9-7460-41fa-a977-d48efc0118de/volumes" Jan 29 17:24:34 crc kubenswrapper[4895]: I0129 17:24:34.040614 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:24:34 crc kubenswrapper[4895]: E0129 17:24:34.169353 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:24:34 crc kubenswrapper[4895]: E0129 17:24:34.169501 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgghf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-d4npr_openshift-marketplace(a3541d1b-c37d-4fe1-9156-d75510696b0c): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:24:34 crc kubenswrapper[4895]: E0129 17:24:34.170692 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-d4npr" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" Jan 29 17:24:48 crc kubenswrapper[4895]: E0129 17:24:48.039124 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-d4npr" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" Jan 29 17:25:01 crc kubenswrapper[4895]: E0129 17:25:01.200603 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:25:01 crc kubenswrapper[4895]: E0129 17:25:01.201406 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgghf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-d4npr_openshift-marketplace(a3541d1b-c37d-4fe1-9156-d75510696b0c): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:25:01 crc kubenswrapper[4895]: E0129 17:25:01.202853 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-d4npr" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" Jan 29 17:25:12 crc kubenswrapper[4895]: E0129 17:25:12.039695 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-d4npr" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" Jan 29 17:25:27 crc kubenswrapper[4895]: E0129 17:25:27.044191 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-d4npr" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" Jan 29 17:25:39 crc kubenswrapper[4895]: E0129 17:25:39.040535 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-d4npr" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" Jan 29 17:25:51 crc kubenswrapper[4895]: I0129 17:25:51.830082 4895 generic.go:334] "Generic (PLEG): container finished" podID="a3541d1b-c37d-4fe1-9156-d75510696b0c" containerID="5477cc80f34b67ab3111a4110ebb47568d7a206d12ef43fa0053d84397d899cb" exitCode=0 Jan 29 17:25:51 crc kubenswrapper[4895]: I0129 17:25:51.830234 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4npr" event={"ID":"a3541d1b-c37d-4fe1-9156-d75510696b0c","Type":"ContainerDied","Data":"5477cc80f34b67ab3111a4110ebb47568d7a206d12ef43fa0053d84397d899cb"} Jan 29 17:25:53 crc kubenswrapper[4895]: I0129 17:25:53.850654 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-d4npr" event={"ID":"a3541d1b-c37d-4fe1-9156-d75510696b0c","Type":"ContainerStarted","Data":"35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97"} Jan 29 17:25:53 crc kubenswrapper[4895]: I0129 17:25:53.884996 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d4npr" podStartSLOduration=3.198240323 podStartE2EDuration="1m37.884968646s" podCreationTimestamp="2026-01-29 17:24:16 +0000 UTC" firstStartedPulling="2026-01-29 17:24:17.986407804 +0000 UTC m=+4341.789385068" lastFinishedPulling="2026-01-29 17:25:52.673136097 +0000 UTC m=+4436.476113391" observedRunningTime="2026-01-29 17:25:53.87766743 +0000 UTC m=+4437.680644704" watchObservedRunningTime="2026-01-29 17:25:53.884968646 +0000 UTC m=+4437.687945940" Jan 29 17:25:57 crc kubenswrapper[4895]: I0129 17:25:57.087990 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:25:57 crc kubenswrapper[4895]: I0129 17:25:57.088642 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:25:58 crc kubenswrapper[4895]: I0129 17:25:58.114779 4895 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d4npr" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" containerName="registry-server" probeResult="failure" output=< Jan 29 17:25:58 crc kubenswrapper[4895]: timeout: failed to connect service ":50051" within 1s Jan 29 17:25:58 crc kubenswrapper[4895]: > Jan 29 17:26:07 crc kubenswrapper[4895]: I0129 17:26:07.100104 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:26:07 crc kubenswrapper[4895]: I0129 17:26:07.144647 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:26:07 crc kubenswrapper[4895]: I0129 17:26:07.333176 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4npr"] Jan 29 17:26:08 crc kubenswrapper[4895]: I0129 17:26:08.966559 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d4npr" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" containerName="registry-server" containerID="cri-o://35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97" gracePeriod=2 Jan 29 17:26:09 crc kubenswrapper[4895]: I0129 17:26:09.879902 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:26:09 crc kubenswrapper[4895]: I0129 17:26:09.938076 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3541d1b-c37d-4fe1-9156-d75510696b0c-utilities\") pod \"a3541d1b-c37d-4fe1-9156-d75510696b0c\" (UID: \"a3541d1b-c37d-4fe1-9156-d75510696b0c\") " Jan 29 17:26:09 crc kubenswrapper[4895]: I0129 17:26:09.938231 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgghf\" (UniqueName: \"kubernetes.io/projected/a3541d1b-c37d-4fe1-9156-d75510696b0c-kube-api-access-kgghf\") pod \"a3541d1b-c37d-4fe1-9156-d75510696b0c\" (UID: \"a3541d1b-c37d-4fe1-9156-d75510696b0c\") " Jan 29 17:26:09 crc kubenswrapper[4895]: I0129 17:26:09.938315 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3541d1b-c37d-4fe1-9156-d75510696b0c-catalog-content\") pod \"a3541d1b-c37d-4fe1-9156-d75510696b0c\" (UID: \"a3541d1b-c37d-4fe1-9156-d75510696b0c\") " Jan 29 17:26:09 crc kubenswrapper[4895]: I0129 17:26:09.939318 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a3541d1b-c37d-4fe1-9156-d75510696b0c-utilities" (OuterVolumeSpecName: "utilities") pod "a3541d1b-c37d-4fe1-9156-d75510696b0c" (UID: "a3541d1b-c37d-4fe1-9156-d75510696b0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:26:09 crc kubenswrapper[4895]: I0129 17:26:09.945263 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3541d1b-c37d-4fe1-9156-d75510696b0c-kube-api-access-kgghf" (OuterVolumeSpecName: "kube-api-access-kgghf") pod "a3541d1b-c37d-4fe1-9156-d75510696b0c" (UID: "a3541d1b-c37d-4fe1-9156-d75510696b0c"). InnerVolumeSpecName "kube-api-access-kgghf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:26:09 crc kubenswrapper[4895]: I0129 17:26:09.976923 4895 generic.go:334] "Generic (PLEG): container finished" podID="a3541d1b-c37d-4fe1-9156-d75510696b0c" containerID="35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97" exitCode=0 Jan 29 17:26:09 crc kubenswrapper[4895]: I0129 17:26:09.976979 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4npr" event={"ID":"a3541d1b-c37d-4fe1-9156-d75510696b0c","Type":"ContainerDied","Data":"35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97"} Jan 29 17:26:09 crc kubenswrapper[4895]: I0129 17:26:09.977008 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d4npr" Jan 29 17:26:09 crc kubenswrapper[4895]: I0129 17:26:09.977024 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4npr" event={"ID":"a3541d1b-c37d-4fe1-9156-d75510696b0c","Type":"ContainerDied","Data":"229dd24f82159456d759a01f23160655e7ff9e956980bd24b884ad2bd82a10bd"} Jan 29 17:26:09 crc kubenswrapper[4895]: I0129 17:26:09.977050 4895 scope.go:117] "RemoveContainer" containerID="35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.019255 4895 scope.go:117] "RemoveContainer" containerID="5477cc80f34b67ab3111a4110ebb47568d7a206d12ef43fa0053d84397d899cb" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.039784 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3541d1b-c37d-4fe1-9156-d75510696b0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.040000 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgghf\" (UniqueName: \"kubernetes.io/projected/a3541d1b-c37d-4fe1-9156-d75510696b0c-kube-api-access-kgghf\") on node \"crc\" DevicePath \"\"" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.058341 4895 scope.go:117] "RemoveContainer" containerID="8525c5f32ae32f04c5d09cd5a8d2e73fdb0556c4c0b12d8066c84217e0db3e3e" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.070594 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3541d1b-c37d-4fe1-9156-d75510696b0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3541d1b-c37d-4fe1-9156-d75510696b0c" (UID: "a3541d1b-c37d-4fe1-9156-d75510696b0c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.098165 4895 scope.go:117] "RemoveContainer" containerID="35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97" Jan 29 17:26:10 crc kubenswrapper[4895]: E0129 17:26:10.098653 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97\": container with ID starting with 35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97 not found: ID does not exist" containerID="35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.098683 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97"} err="failed to get container status \"35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97\": rpc error: code = NotFound desc = could not find container \"35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97\": container with ID starting with 35de604fdc4fe40eb7b81f626df8af173488c8dde25c7905d568a8d38a2a1a97 not found: ID does not exist" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.098704 4895 scope.go:117] "RemoveContainer" containerID="5477cc80f34b67ab3111a4110ebb47568d7a206d12ef43fa0053d84397d899cb" Jan 29 17:26:10 crc kubenswrapper[4895]: E0129 17:26:10.099158 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5477cc80f34b67ab3111a4110ebb47568d7a206d12ef43fa0053d84397d899cb\": container with ID starting with 5477cc80f34b67ab3111a4110ebb47568d7a206d12ef43fa0053d84397d899cb not found: ID does not exist" containerID="5477cc80f34b67ab3111a4110ebb47568d7a206d12ef43fa0053d84397d899cb" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.099215 
4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5477cc80f34b67ab3111a4110ebb47568d7a206d12ef43fa0053d84397d899cb"} err="failed to get container status \"5477cc80f34b67ab3111a4110ebb47568d7a206d12ef43fa0053d84397d899cb\": rpc error: code = NotFound desc = could not find container \"5477cc80f34b67ab3111a4110ebb47568d7a206d12ef43fa0053d84397d899cb\": container with ID starting with 5477cc80f34b67ab3111a4110ebb47568d7a206d12ef43fa0053d84397d899cb not found: ID does not exist" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.099250 4895 scope.go:117] "RemoveContainer" containerID="8525c5f32ae32f04c5d09cd5a8d2e73fdb0556c4c0b12d8066c84217e0db3e3e" Jan 29 17:26:10 crc kubenswrapper[4895]: E0129 17:26:10.099584 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8525c5f32ae32f04c5d09cd5a8d2e73fdb0556c4c0b12d8066c84217e0db3e3e\": container with ID starting with 8525c5f32ae32f04c5d09cd5a8d2e73fdb0556c4c0b12d8066c84217e0db3e3e not found: ID does not exist" containerID="8525c5f32ae32f04c5d09cd5a8d2e73fdb0556c4c0b12d8066c84217e0db3e3e" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.099625 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8525c5f32ae32f04c5d09cd5a8d2e73fdb0556c4c0b12d8066c84217e0db3e3e"} err="failed to get container status \"8525c5f32ae32f04c5d09cd5a8d2e73fdb0556c4c0b12d8066c84217e0db3e3e\": rpc error: code = NotFound desc = could not find container \"8525c5f32ae32f04c5d09cd5a8d2e73fdb0556c4c0b12d8066c84217e0db3e3e\": container with ID starting with 8525c5f32ae32f04c5d09cd5a8d2e73fdb0556c4c0b12d8066c84217e0db3e3e not found: ID does not exist" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.141448 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a3541d1b-c37d-4fe1-9156-d75510696b0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.324692 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4npr"] Jan 29 17:26:10 crc kubenswrapper[4895]: I0129 17:26:10.349546 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d4npr"] Jan 29 17:26:11 crc kubenswrapper[4895]: I0129 17:26:11.049272 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" path="/var/lib/kubelet/pods/a3541d1b-c37d-4fe1-9156-d75510696b0c/volumes" Jan 29 17:26:57 crc kubenswrapper[4895]: I0129 17:26:57.823245 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:26:57 crc kubenswrapper[4895]: I0129 17:26:57.823929 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:27:27 crc kubenswrapper[4895]: I0129 17:27:27.823567 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:27:27 crc kubenswrapper[4895]: I0129 17:27:27.824324 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" 
podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:27:57 crc kubenswrapper[4895]: I0129 17:27:57.823897 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:27:57 crc kubenswrapper[4895]: I0129 17:27:57.824834 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:27:57 crc kubenswrapper[4895]: I0129 17:27:57.824963 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 17:27:57 crc kubenswrapper[4895]: I0129 17:27:57.826416 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:27:57 crc kubenswrapper[4895]: I0129 17:27:57.826509 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" gracePeriod=600 Jan 29 
17:27:58 crc kubenswrapper[4895]: E0129 17:27:58.233789 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:27:58 crc kubenswrapper[4895]: I0129 17:27:58.909262 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" exitCode=0 Jan 29 17:27:58 crc kubenswrapper[4895]: I0129 17:27:58.909801 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3"} Jan 29 17:27:58 crc kubenswrapper[4895]: I0129 17:27:58.909836 4895 scope.go:117] "RemoveContainer" containerID="0d20ef6f3d76b820c58ac85becbcc108f9121804bc544948e34b90d436a00c9e" Jan 29 17:27:58 crc kubenswrapper[4895]: I0129 17:27:58.910577 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:27:58 crc kubenswrapper[4895]: E0129 17:27:58.910854 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:28:14 crc kubenswrapper[4895]: I0129 17:28:14.037505 4895 
scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:28:14 crc kubenswrapper[4895]: E0129 17:28:14.038382 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:28:28 crc kubenswrapper[4895]: I0129 17:28:28.036822 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:28:28 crc kubenswrapper[4895]: E0129 17:28:28.039259 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:28:41 crc kubenswrapper[4895]: I0129 17:28:41.037272 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:28:41 crc kubenswrapper[4895]: E0129 17:28:41.038230 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:28:54 crc kubenswrapper[4895]: I0129 
17:28:54.037914 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:28:54 crc kubenswrapper[4895]: E0129 17:28:54.039444 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:29:08 crc kubenswrapper[4895]: I0129 17:29:08.038804 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:29:08 crc kubenswrapper[4895]: E0129 17:29:08.040917 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:29:21 crc kubenswrapper[4895]: I0129 17:29:21.037232 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:29:21 crc kubenswrapper[4895]: E0129 17:29:21.038280 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:29:29 crc 
kubenswrapper[4895]: I0129 17:29:29.646953 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c2zn5"] Jan 29 17:29:29 crc kubenswrapper[4895]: E0129 17:29:29.649225 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" containerName="registry-server" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.649257 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" containerName="registry-server" Jan 29 17:29:29 crc kubenswrapper[4895]: E0129 17:29:29.649271 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e059f0b9-7460-41fa-a977-d48efc0118de" containerName="registry-server" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.649278 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e059f0b9-7460-41fa-a977-d48efc0118de" containerName="registry-server" Jan 29 17:29:29 crc kubenswrapper[4895]: E0129 17:29:29.649294 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e059f0b9-7460-41fa-a977-d48efc0118de" containerName="extract-content" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.649300 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e059f0b9-7460-41fa-a977-d48efc0118de" containerName="extract-content" Jan 29 17:29:29 crc kubenswrapper[4895]: E0129 17:29:29.649314 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" containerName="extract-content" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.649320 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" containerName="extract-content" Jan 29 17:29:29 crc kubenswrapper[4895]: E0129 17:29:29.649332 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" containerName="extract-utilities" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 
17:29:29.649339 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" containerName="extract-utilities" Jan 29 17:29:29 crc kubenswrapper[4895]: E0129 17:29:29.649349 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e059f0b9-7460-41fa-a977-d48efc0118de" containerName="extract-utilities" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.649355 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e059f0b9-7460-41fa-a977-d48efc0118de" containerName="extract-utilities" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.649548 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e059f0b9-7460-41fa-a977-d48efc0118de" containerName="registry-server" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.649562 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3541d1b-c37d-4fe1-9156-d75510696b0c" containerName="registry-server" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.651949 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.658997 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2zn5"] Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.679814 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654191c1-5cb4-426e-8311-a18b486563ca-utilities\") pod \"redhat-marketplace-c2zn5\" (UID: \"654191c1-5cb4-426e-8311-a18b486563ca\") " pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.781663 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654191c1-5cb4-426e-8311-a18b486563ca-catalog-content\") pod \"redhat-marketplace-c2zn5\" (UID: \"654191c1-5cb4-426e-8311-a18b486563ca\") " pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.782033 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654191c1-5cb4-426e-8311-a18b486563ca-utilities\") pod \"redhat-marketplace-c2zn5\" (UID: \"654191c1-5cb4-426e-8311-a18b486563ca\") " pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.782386 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zbmj\" (UniqueName: \"kubernetes.io/projected/654191c1-5cb4-426e-8311-a18b486563ca-kube-api-access-7zbmj\") pod \"redhat-marketplace-c2zn5\" (UID: \"654191c1-5cb4-426e-8311-a18b486563ca\") " pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.782659 4895 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654191c1-5cb4-426e-8311-a18b486563ca-utilities\") pod \"redhat-marketplace-c2zn5\" (UID: \"654191c1-5cb4-426e-8311-a18b486563ca\") " pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.884554 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zbmj\" (UniqueName: \"kubernetes.io/projected/654191c1-5cb4-426e-8311-a18b486563ca-kube-api-access-7zbmj\") pod \"redhat-marketplace-c2zn5\" (UID: \"654191c1-5cb4-426e-8311-a18b486563ca\") " pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.884637 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654191c1-5cb4-426e-8311-a18b486563ca-catalog-content\") pod \"redhat-marketplace-c2zn5\" (UID: \"654191c1-5cb4-426e-8311-a18b486563ca\") " pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.885148 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654191c1-5cb4-426e-8311-a18b486563ca-catalog-content\") pod \"redhat-marketplace-c2zn5\" (UID: \"654191c1-5cb4-426e-8311-a18b486563ca\") " pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.911699 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zbmj\" (UniqueName: \"kubernetes.io/projected/654191c1-5cb4-426e-8311-a18b486563ca-kube-api-access-7zbmj\") pod \"redhat-marketplace-c2zn5\" (UID: \"654191c1-5cb4-426e-8311-a18b486563ca\") " pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:29 crc kubenswrapper[4895]: I0129 17:29:29.979269 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:30 crc kubenswrapper[4895]: I0129 17:29:30.431430 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2zn5"] Jan 29 17:29:30 crc kubenswrapper[4895]: I0129 17:29:30.685487 4895 generic.go:334] "Generic (PLEG): container finished" podID="654191c1-5cb4-426e-8311-a18b486563ca" containerID="094d302de1eac78e2bbf5235ec148df0ad256cd9233af6284cae4c8da920a20b" exitCode=0 Jan 29 17:29:30 crc kubenswrapper[4895]: I0129 17:29:30.685541 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2zn5" event={"ID":"654191c1-5cb4-426e-8311-a18b486563ca","Type":"ContainerDied","Data":"094d302de1eac78e2bbf5235ec148df0ad256cd9233af6284cae4c8da920a20b"} Jan 29 17:29:30 crc kubenswrapper[4895]: I0129 17:29:30.685589 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2zn5" event={"ID":"654191c1-5cb4-426e-8311-a18b486563ca","Type":"ContainerStarted","Data":"6ede027e16eb19dc10fea68870b7ef006debc61a8a2aca9f5759aaacc1ef56a7"} Jan 29 17:29:32 crc kubenswrapper[4895]: I0129 17:29:32.709765 4895 generic.go:334] "Generic (PLEG): container finished" podID="654191c1-5cb4-426e-8311-a18b486563ca" containerID="05c7b4a7dd0a8dcdc22532c180af0820c4d3ee89aafe79be0a5cba57baf3e9a9" exitCode=0 Jan 29 17:29:32 crc kubenswrapper[4895]: I0129 17:29:32.710381 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2zn5" event={"ID":"654191c1-5cb4-426e-8311-a18b486563ca","Type":"ContainerDied","Data":"05c7b4a7dd0a8dcdc22532c180af0820c4d3ee89aafe79be0a5cba57baf3e9a9"} Jan 29 17:29:33 crc kubenswrapper[4895]: I0129 17:29:33.722860 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2zn5" 
event={"ID":"654191c1-5cb4-426e-8311-a18b486563ca","Type":"ContainerStarted","Data":"ea694682be6c5eb45ecfd0a904e983be4d8ec5e7285918b5fc2f43b64567e9cb"} Jan 29 17:29:33 crc kubenswrapper[4895]: I0129 17:29:33.747343 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c2zn5" podStartSLOduration=2.111022793 podStartE2EDuration="4.747321045s" podCreationTimestamp="2026-01-29 17:29:29 +0000 UTC" firstStartedPulling="2026-01-29 17:29:30.687236509 +0000 UTC m=+4654.490213763" lastFinishedPulling="2026-01-29 17:29:33.323534751 +0000 UTC m=+4657.126512015" observedRunningTime="2026-01-29 17:29:33.741747214 +0000 UTC m=+4657.544724488" watchObservedRunningTime="2026-01-29 17:29:33.747321045 +0000 UTC m=+4657.550298319" Jan 29 17:29:34 crc kubenswrapper[4895]: I0129 17:29:34.037398 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:29:34 crc kubenswrapper[4895]: E0129 17:29:34.037911 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:29:39 crc kubenswrapper[4895]: I0129 17:29:39.980378 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:39 crc kubenswrapper[4895]: I0129 17:29:39.981349 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:40 crc kubenswrapper[4895]: I0129 17:29:40.059243 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:41 crc kubenswrapper[4895]: I0129 17:29:41.146781 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:41 crc kubenswrapper[4895]: I0129 17:29:41.205682 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2zn5"] Jan 29 17:29:42 crc kubenswrapper[4895]: I0129 17:29:42.828096 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c2zn5" podUID="654191c1-5cb4-426e-8311-a18b486563ca" containerName="registry-server" containerID="cri-o://ea694682be6c5eb45ecfd0a904e983be4d8ec5e7285918b5fc2f43b64567e9cb" gracePeriod=2 Jan 29 17:29:43 crc kubenswrapper[4895]: I0129 17:29:43.840759 4895 generic.go:334] "Generic (PLEG): container finished" podID="654191c1-5cb4-426e-8311-a18b486563ca" containerID="ea694682be6c5eb45ecfd0a904e983be4d8ec5e7285918b5fc2f43b64567e9cb" exitCode=0 Jan 29 17:29:43 crc kubenswrapper[4895]: I0129 17:29:43.840939 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2zn5" event={"ID":"654191c1-5cb4-426e-8311-a18b486563ca","Type":"ContainerDied","Data":"ea694682be6c5eb45ecfd0a904e983be4d8ec5e7285918b5fc2f43b64567e9cb"} Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.030092 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.089942 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654191c1-5cb4-426e-8311-a18b486563ca-utilities\") pod \"654191c1-5cb4-426e-8311-a18b486563ca\" (UID: \"654191c1-5cb4-426e-8311-a18b486563ca\") " Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.090174 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zbmj\" (UniqueName: \"kubernetes.io/projected/654191c1-5cb4-426e-8311-a18b486563ca-kube-api-access-7zbmj\") pod \"654191c1-5cb4-426e-8311-a18b486563ca\" (UID: \"654191c1-5cb4-426e-8311-a18b486563ca\") " Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.090263 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654191c1-5cb4-426e-8311-a18b486563ca-catalog-content\") pod \"654191c1-5cb4-426e-8311-a18b486563ca\" (UID: \"654191c1-5cb4-426e-8311-a18b486563ca\") " Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.091220 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654191c1-5cb4-426e-8311-a18b486563ca-utilities" (OuterVolumeSpecName: "utilities") pod "654191c1-5cb4-426e-8311-a18b486563ca" (UID: "654191c1-5cb4-426e-8311-a18b486563ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.097448 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654191c1-5cb4-426e-8311-a18b486563ca-kube-api-access-7zbmj" (OuterVolumeSpecName: "kube-api-access-7zbmj") pod "654191c1-5cb4-426e-8311-a18b486563ca" (UID: "654191c1-5cb4-426e-8311-a18b486563ca"). InnerVolumeSpecName "kube-api-access-7zbmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.119404 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654191c1-5cb4-426e-8311-a18b486563ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "654191c1-5cb4-426e-8311-a18b486563ca" (UID: "654191c1-5cb4-426e-8311-a18b486563ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.192727 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zbmj\" (UniqueName: \"kubernetes.io/projected/654191c1-5cb4-426e-8311-a18b486563ca-kube-api-access-7zbmj\") on node \"crc\" DevicePath \"\"" Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.192769 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/654191c1-5cb4-426e-8311-a18b486563ca-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.192779 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/654191c1-5cb4-426e-8311-a18b486563ca-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.852363 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2zn5" event={"ID":"654191c1-5cb4-426e-8311-a18b486563ca","Type":"ContainerDied","Data":"6ede027e16eb19dc10fea68870b7ef006debc61a8a2aca9f5759aaacc1ef56a7"} Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.852436 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2zn5" Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.852758 4895 scope.go:117] "RemoveContainer" containerID="ea694682be6c5eb45ecfd0a904e983be4d8ec5e7285918b5fc2f43b64567e9cb" Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.882694 4895 scope.go:117] "RemoveContainer" containerID="05c7b4a7dd0a8dcdc22532c180af0820c4d3ee89aafe79be0a5cba57baf3e9a9" Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.890785 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2zn5"] Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.899722 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2zn5"] Jan 29 17:29:44 crc kubenswrapper[4895]: I0129 17:29:44.918040 4895 scope.go:117] "RemoveContainer" containerID="094d302de1eac78e2bbf5235ec148df0ad256cd9233af6284cae4c8da920a20b" Jan 29 17:29:45 crc kubenswrapper[4895]: I0129 17:29:45.048433 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654191c1-5cb4-426e-8311-a18b486563ca" path="/var/lib/kubelet/pods/654191c1-5cb4-426e-8311-a18b486563ca/volumes" Jan 29 17:29:48 crc kubenswrapper[4895]: I0129 17:29:48.036761 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:29:48 crc kubenswrapper[4895]: E0129 17:29:48.037774 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:29:59 crc kubenswrapper[4895]: I0129 17:29:59.037484 4895 scope.go:117] "RemoveContainer" 
containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:29:59 crc kubenswrapper[4895]: E0129 17:29:59.038506 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.169742 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl"] Jan 29 17:30:00 crc kubenswrapper[4895]: E0129 17:30:00.170368 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654191c1-5cb4-426e-8311-a18b486563ca" containerName="extract-utilities" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.170408 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="654191c1-5cb4-426e-8311-a18b486563ca" containerName="extract-utilities" Jan 29 17:30:00 crc kubenswrapper[4895]: E0129 17:30:00.170425 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654191c1-5cb4-426e-8311-a18b486563ca" containerName="registry-server" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.170434 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="654191c1-5cb4-426e-8311-a18b486563ca" containerName="registry-server" Jan 29 17:30:00 crc kubenswrapper[4895]: E0129 17:30:00.170448 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654191c1-5cb4-426e-8311-a18b486563ca" containerName="extract-content" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.170455 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="654191c1-5cb4-426e-8311-a18b486563ca" containerName="extract-content" Jan 29 17:30:00 crc kubenswrapper[4895]: 
I0129 17:30:00.170645 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="654191c1-5cb4-426e-8311-a18b486563ca" containerName="registry-server" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.171605 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.173757 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.174057 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.186250 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl"] Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.321989 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c603d866-567b-43b2-9d73-25e458ae59dd-secret-volume\") pod \"collect-profiles-29495130-q67vl\" (UID: \"c603d866-567b-43b2-9d73-25e458ae59dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.322116 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84k6c\" (UniqueName: \"kubernetes.io/projected/c603d866-567b-43b2-9d73-25e458ae59dd-kube-api-access-84k6c\") pod \"collect-profiles-29495130-q67vl\" (UID: \"c603d866-567b-43b2-9d73-25e458ae59dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.322182 4895 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c603d866-567b-43b2-9d73-25e458ae59dd-config-volume\") pod \"collect-profiles-29495130-q67vl\" (UID: \"c603d866-567b-43b2-9d73-25e458ae59dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.424016 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c603d866-567b-43b2-9d73-25e458ae59dd-config-volume\") pod \"collect-profiles-29495130-q67vl\" (UID: \"c603d866-567b-43b2-9d73-25e458ae59dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.424443 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c603d866-567b-43b2-9d73-25e458ae59dd-secret-volume\") pod \"collect-profiles-29495130-q67vl\" (UID: \"c603d866-567b-43b2-9d73-25e458ae59dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.424536 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84k6c\" (UniqueName: \"kubernetes.io/projected/c603d866-567b-43b2-9d73-25e458ae59dd-kube-api-access-84k6c\") pod \"collect-profiles-29495130-q67vl\" (UID: \"c603d866-567b-43b2-9d73-25e458ae59dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.425055 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c603d866-567b-43b2-9d73-25e458ae59dd-config-volume\") pod \"collect-profiles-29495130-q67vl\" (UID: \"c603d866-567b-43b2-9d73-25e458ae59dd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.436106 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c603d866-567b-43b2-9d73-25e458ae59dd-secret-volume\") pod \"collect-profiles-29495130-q67vl\" (UID: \"c603d866-567b-43b2-9d73-25e458ae59dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.439793 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84k6c\" (UniqueName: \"kubernetes.io/projected/c603d866-567b-43b2-9d73-25e458ae59dd-kube-api-access-84k6c\") pod \"collect-profiles-29495130-q67vl\" (UID: \"c603d866-567b-43b2-9d73-25e458ae59dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.498034 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:00 crc kubenswrapper[4895]: I0129 17:30:00.952231 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl"] Jan 29 17:30:01 crc kubenswrapper[4895]: I0129 17:30:01.011900 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" event={"ID":"c603d866-567b-43b2-9d73-25e458ae59dd","Type":"ContainerStarted","Data":"d7a7dd01638f2942b2d986af815f352d7ebdcb9e10d15f407955414f8ee7d573"} Jan 29 17:30:02 crc kubenswrapper[4895]: I0129 17:30:02.023593 4895 generic.go:334] "Generic (PLEG): container finished" podID="c603d866-567b-43b2-9d73-25e458ae59dd" containerID="12cf5af24151ebdd940e2836e7e16a82650cd72c2324188bc6e1b915ffd1fd43" exitCode=0 Jan 29 17:30:02 crc kubenswrapper[4895]: I0129 17:30:02.023715 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" event={"ID":"c603d866-567b-43b2-9d73-25e458ae59dd","Type":"ContainerDied","Data":"12cf5af24151ebdd940e2836e7e16a82650cd72c2324188bc6e1b915ffd1fd43"} Jan 29 17:30:03 crc kubenswrapper[4895]: I0129 17:30:03.480154 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:03 crc kubenswrapper[4895]: I0129 17:30:03.588190 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84k6c\" (UniqueName: \"kubernetes.io/projected/c603d866-567b-43b2-9d73-25e458ae59dd-kube-api-access-84k6c\") pod \"c603d866-567b-43b2-9d73-25e458ae59dd\" (UID: \"c603d866-567b-43b2-9d73-25e458ae59dd\") " Jan 29 17:30:03 crc kubenswrapper[4895]: I0129 17:30:03.588300 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c603d866-567b-43b2-9d73-25e458ae59dd-config-volume\") pod \"c603d866-567b-43b2-9d73-25e458ae59dd\" (UID: \"c603d866-567b-43b2-9d73-25e458ae59dd\") " Jan 29 17:30:03 crc kubenswrapper[4895]: I0129 17:30:03.588397 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c603d866-567b-43b2-9d73-25e458ae59dd-secret-volume\") pod \"c603d866-567b-43b2-9d73-25e458ae59dd\" (UID: \"c603d866-567b-43b2-9d73-25e458ae59dd\") " Jan 29 17:30:03 crc kubenswrapper[4895]: I0129 17:30:03.590334 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c603d866-567b-43b2-9d73-25e458ae59dd-config-volume" (OuterVolumeSpecName: "config-volume") pod "c603d866-567b-43b2-9d73-25e458ae59dd" (UID: "c603d866-567b-43b2-9d73-25e458ae59dd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:30:03 crc kubenswrapper[4895]: I0129 17:30:03.596998 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c603d866-567b-43b2-9d73-25e458ae59dd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c603d866-567b-43b2-9d73-25e458ae59dd" (UID: "c603d866-567b-43b2-9d73-25e458ae59dd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:30:03 crc kubenswrapper[4895]: I0129 17:30:03.598661 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c603d866-567b-43b2-9d73-25e458ae59dd-kube-api-access-84k6c" (OuterVolumeSpecName: "kube-api-access-84k6c") pod "c603d866-567b-43b2-9d73-25e458ae59dd" (UID: "c603d866-567b-43b2-9d73-25e458ae59dd"). InnerVolumeSpecName "kube-api-access-84k6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:30:03 crc kubenswrapper[4895]: I0129 17:30:03.691487 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84k6c\" (UniqueName: \"kubernetes.io/projected/c603d866-567b-43b2-9d73-25e458ae59dd-kube-api-access-84k6c\") on node \"crc\" DevicePath \"\"" Jan 29 17:30:03 crc kubenswrapper[4895]: I0129 17:30:03.691740 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c603d866-567b-43b2-9d73-25e458ae59dd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:30:03 crc kubenswrapper[4895]: I0129 17:30:03.691860 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c603d866-567b-43b2-9d73-25e458ae59dd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:30:04 crc kubenswrapper[4895]: I0129 17:30:04.045222 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" event={"ID":"c603d866-567b-43b2-9d73-25e458ae59dd","Type":"ContainerDied","Data":"d7a7dd01638f2942b2d986af815f352d7ebdcb9e10d15f407955414f8ee7d573"} Jan 29 17:30:04 crc kubenswrapper[4895]: I0129 17:30:04.045800 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7a7dd01638f2942b2d986af815f352d7ebdcb9e10d15f407955414f8ee7d573" Jan 29 17:30:04 crc kubenswrapper[4895]: I0129 17:30:04.045934 4895 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-q67vl" Jan 29 17:30:04 crc kubenswrapper[4895]: I0129 17:30:04.554724 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh"] Jan 29 17:30:04 crc kubenswrapper[4895]: I0129 17:30:04.562211 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-7t2dh"] Jan 29 17:30:05 crc kubenswrapper[4895]: I0129 17:30:05.050937 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73" path="/var/lib/kubelet/pods/0febde8e-5aa6-4f4c-bc7f-92d2b0c91b73/volumes" Jan 29 17:30:10 crc kubenswrapper[4895]: I0129 17:30:10.036797 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:30:10 crc kubenswrapper[4895]: E0129 17:30:10.037706 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:30:20 crc kubenswrapper[4895]: I0129 17:30:20.099349 4895 scope.go:117] "RemoveContainer" containerID="92c6d1ac0d4adad01c597565213f0e3e78aff25743a51bc8187551e65a564017" Jan 29 17:30:21 crc kubenswrapper[4895]: I0129 17:30:21.036941 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:30:21 crc kubenswrapper[4895]: E0129 17:30:21.042847 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:30:34 crc kubenswrapper[4895]: I0129 17:30:34.036917 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:30:34 crc kubenswrapper[4895]: E0129 17:30:34.037709 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:30:46 crc kubenswrapper[4895]: I0129 17:30:46.036752 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:30:46 crc kubenswrapper[4895]: E0129 17:30:46.037491 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.662365 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9b8s7"] Jan 29 17:30:52 crc kubenswrapper[4895]: E0129 17:30:52.663657 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c603d866-567b-43b2-9d73-25e458ae59dd" containerName="collect-profiles" Jan 29 
17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.663676 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="c603d866-567b-43b2-9d73-25e458ae59dd" containerName="collect-profiles" Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.663931 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="c603d866-567b-43b2-9d73-25e458ae59dd" containerName="collect-profiles" Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.665576 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.699398 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9b8s7"] Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.779931 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-catalog-content\") pod \"community-operators-9b8s7\" (UID: \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\") " pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.780356 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2nrh\" (UniqueName: \"kubernetes.io/projected/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-kube-api-access-t2nrh\") pod \"community-operators-9b8s7\" (UID: \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\") " pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.780409 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-utilities\") pod \"community-operators-9b8s7\" (UID: \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\") " 
pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.882831 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-catalog-content\") pod \"community-operators-9b8s7\" (UID: \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\") " pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.882984 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2nrh\" (UniqueName: \"kubernetes.io/projected/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-kube-api-access-t2nrh\") pod \"community-operators-9b8s7\" (UID: \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\") " pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.883034 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-utilities\") pod \"community-operators-9b8s7\" (UID: \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\") " pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.883333 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-catalog-content\") pod \"community-operators-9b8s7\" (UID: \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\") " pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.883500 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-utilities\") pod \"community-operators-9b8s7\" (UID: \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\") " 
pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:30:52 crc kubenswrapper[4895]: I0129 17:30:52.905371 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2nrh\" (UniqueName: \"kubernetes.io/projected/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-kube-api-access-t2nrh\") pod \"community-operators-9b8s7\" (UID: \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\") " pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:30:53 crc kubenswrapper[4895]: I0129 17:30:53.004366 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:30:53 crc kubenswrapper[4895]: I0129 17:30:53.553609 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9b8s7"] Jan 29 17:30:54 crc kubenswrapper[4895]: I0129 17:30:54.492813 4895 generic.go:334] "Generic (PLEG): container finished" podID="5b798c5d-b2c4-457c-855b-926e2a4b7c6c" containerID="ca5ae8a905340eb85657c87d7b130c7a806244001aa06e398820b1e245756bc9" exitCode=0 Jan 29 17:30:54 crc kubenswrapper[4895]: I0129 17:30:54.492895 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b8s7" event={"ID":"5b798c5d-b2c4-457c-855b-926e2a4b7c6c","Type":"ContainerDied","Data":"ca5ae8a905340eb85657c87d7b130c7a806244001aa06e398820b1e245756bc9"} Jan 29 17:30:54 crc kubenswrapper[4895]: I0129 17:30:54.493245 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b8s7" event={"ID":"5b798c5d-b2c4-457c-855b-926e2a4b7c6c","Type":"ContainerStarted","Data":"c8ab808e4ddcf235d57b2c9cb6521c2c916b7cb8edd3d0143221394ee1e72cfa"} Jan 29 17:30:54 crc kubenswrapper[4895]: I0129 17:30:54.495709 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:30:55 crc kubenswrapper[4895]: I0129 17:30:55.505184 4895 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-9b8s7" event={"ID":"5b798c5d-b2c4-457c-855b-926e2a4b7c6c","Type":"ContainerStarted","Data":"a4e26787352653b6fe085552f4a51d09873c0bb5e65637c9b52a3b33a62fd69b"} Jan 29 17:30:57 crc kubenswrapper[4895]: I0129 17:30:57.525617 4895 generic.go:334] "Generic (PLEG): container finished" podID="5b798c5d-b2c4-457c-855b-926e2a4b7c6c" containerID="a4e26787352653b6fe085552f4a51d09873c0bb5e65637c9b52a3b33a62fd69b" exitCode=0 Jan 29 17:30:57 crc kubenswrapper[4895]: I0129 17:30:57.525678 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b8s7" event={"ID":"5b798c5d-b2c4-457c-855b-926e2a4b7c6c","Type":"ContainerDied","Data":"a4e26787352653b6fe085552f4a51d09873c0bb5e65637c9b52a3b33a62fd69b"} Jan 29 17:30:58 crc kubenswrapper[4895]: I0129 17:30:58.536436 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b8s7" event={"ID":"5b798c5d-b2c4-457c-855b-926e2a4b7c6c","Type":"ContainerStarted","Data":"942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44"} Jan 29 17:30:58 crc kubenswrapper[4895]: I0129 17:30:58.590382 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9b8s7" podStartSLOduration=3.126398278 podStartE2EDuration="6.590356776s" podCreationTimestamp="2026-01-29 17:30:52 +0000 UTC" firstStartedPulling="2026-01-29 17:30:54.49540719 +0000 UTC m=+4738.298384464" lastFinishedPulling="2026-01-29 17:30:57.959365658 +0000 UTC m=+4741.762342962" observedRunningTime="2026-01-29 17:30:58.554616408 +0000 UTC m=+4742.357593702" watchObservedRunningTime="2026-01-29 17:30:58.590356776 +0000 UTC m=+4742.393334060" Jan 29 17:31:01 crc kubenswrapper[4895]: I0129 17:31:01.037614 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:31:01 crc kubenswrapper[4895]: E0129 17:31:01.038741 4895 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:31:03 crc kubenswrapper[4895]: I0129 17:31:03.005452 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:31:03 crc kubenswrapper[4895]: I0129 17:31:03.006092 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:31:03 crc kubenswrapper[4895]: I0129 17:31:03.068737 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:31:03 crc kubenswrapper[4895]: I0129 17:31:03.633232 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:31:03 crc kubenswrapper[4895]: I0129 17:31:03.699694 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9b8s7"] Jan 29 17:31:05 crc kubenswrapper[4895]: I0129 17:31:05.595494 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9b8s7" podUID="5b798c5d-b2c4-457c-855b-926e2a4b7c6c" containerName="registry-server" containerID="cri-o://942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44" gracePeriod=2 Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.357983 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.477969 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-utilities\") pod \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\" (UID: \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\") " Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.478296 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-catalog-content\") pod \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\" (UID: \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\") " Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.478460 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2nrh\" (UniqueName: \"kubernetes.io/projected/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-kube-api-access-t2nrh\") pod \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\" (UID: \"5b798c5d-b2c4-457c-855b-926e2a4b7c6c\") " Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.479413 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-utilities" (OuterVolumeSpecName: "utilities") pod "5b798c5d-b2c4-457c-855b-926e2a4b7c6c" (UID: "5b798c5d-b2c4-457c-855b-926e2a4b7c6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.489232 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-kube-api-access-t2nrh" (OuterVolumeSpecName: "kube-api-access-t2nrh") pod "5b798c5d-b2c4-457c-855b-926e2a4b7c6c" (UID: "5b798c5d-b2c4-457c-855b-926e2a4b7c6c"). InnerVolumeSpecName "kube-api-access-t2nrh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.587233 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2nrh\" (UniqueName: \"kubernetes.io/projected/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-kube-api-access-t2nrh\") on node \"crc\" DevicePath \"\"" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.587733 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.587937 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b798c5d-b2c4-457c-855b-926e2a4b7c6c" (UID: "5b798c5d-b2c4-457c-855b-926e2a4b7c6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.614172 4895 generic.go:334] "Generic (PLEG): container finished" podID="5b798c5d-b2c4-457c-855b-926e2a4b7c6c" containerID="942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44" exitCode=0 Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.614248 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9b8s7" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.614258 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b8s7" event={"ID":"5b798c5d-b2c4-457c-855b-926e2a4b7c6c","Type":"ContainerDied","Data":"942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44"} Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.616271 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9b8s7" event={"ID":"5b798c5d-b2c4-457c-855b-926e2a4b7c6c","Type":"ContainerDied","Data":"c8ab808e4ddcf235d57b2c9cb6521c2c916b7cb8edd3d0143221394ee1e72cfa"} Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.616348 4895 scope.go:117] "RemoveContainer" containerID="942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.647039 4895 scope.go:117] "RemoveContainer" containerID="a4e26787352653b6fe085552f4a51d09873c0bb5e65637c9b52a3b33a62fd69b" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.689604 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b798c5d-b2c4-457c-855b-926e2a4b7c6c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.694909 4895 scope.go:117] "RemoveContainer" containerID="ca5ae8a905340eb85657c87d7b130c7a806244001aa06e398820b1e245756bc9" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.697634 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9b8s7"] Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.722316 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9b8s7"] Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.751441 4895 scope.go:117] "RemoveContainer" 
containerID="942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44" Jan 29 17:31:06 crc kubenswrapper[4895]: E0129 17:31:06.752566 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44\": container with ID starting with 942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44 not found: ID does not exist" containerID="942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.752619 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44"} err="failed to get container status \"942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44\": rpc error: code = NotFound desc = could not find container \"942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44\": container with ID starting with 942d90e9e85a64088334677560b2daf570d522f4069456d35a6c2df5730eae44 not found: ID does not exist" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.752648 4895 scope.go:117] "RemoveContainer" containerID="a4e26787352653b6fe085552f4a51d09873c0bb5e65637c9b52a3b33a62fd69b" Jan 29 17:31:06 crc kubenswrapper[4895]: E0129 17:31:06.753324 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e26787352653b6fe085552f4a51d09873c0bb5e65637c9b52a3b33a62fd69b\": container with ID starting with a4e26787352653b6fe085552f4a51d09873c0bb5e65637c9b52a3b33a62fd69b not found: ID does not exist" containerID="a4e26787352653b6fe085552f4a51d09873c0bb5e65637c9b52a3b33a62fd69b" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.753425 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a4e26787352653b6fe085552f4a51d09873c0bb5e65637c9b52a3b33a62fd69b"} err="failed to get container status \"a4e26787352653b6fe085552f4a51d09873c0bb5e65637c9b52a3b33a62fd69b\": rpc error: code = NotFound desc = could not find container \"a4e26787352653b6fe085552f4a51d09873c0bb5e65637c9b52a3b33a62fd69b\": container with ID starting with a4e26787352653b6fe085552f4a51d09873c0bb5e65637c9b52a3b33a62fd69b not found: ID does not exist" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.753473 4895 scope.go:117] "RemoveContainer" containerID="ca5ae8a905340eb85657c87d7b130c7a806244001aa06e398820b1e245756bc9" Jan 29 17:31:06 crc kubenswrapper[4895]: E0129 17:31:06.753940 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca5ae8a905340eb85657c87d7b130c7a806244001aa06e398820b1e245756bc9\": container with ID starting with ca5ae8a905340eb85657c87d7b130c7a806244001aa06e398820b1e245756bc9 not found: ID does not exist" containerID="ca5ae8a905340eb85657c87d7b130c7a806244001aa06e398820b1e245756bc9" Jan 29 17:31:06 crc kubenswrapper[4895]: I0129 17:31:06.753973 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca5ae8a905340eb85657c87d7b130c7a806244001aa06e398820b1e245756bc9"} err="failed to get container status \"ca5ae8a905340eb85657c87d7b130c7a806244001aa06e398820b1e245756bc9\": rpc error: code = NotFound desc = could not find container \"ca5ae8a905340eb85657c87d7b130c7a806244001aa06e398820b1e245756bc9\": container with ID starting with ca5ae8a905340eb85657c87d7b130c7a806244001aa06e398820b1e245756bc9 not found: ID does not exist" Jan 29 17:31:07 crc kubenswrapper[4895]: I0129 17:31:07.049906 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b798c5d-b2c4-457c-855b-926e2a4b7c6c" path="/var/lib/kubelet/pods/5b798c5d-b2c4-457c-855b-926e2a4b7c6c/volumes" Jan 29 17:31:16 crc kubenswrapper[4895]: I0129 
17:31:16.037604 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:31:16 crc kubenswrapper[4895]: E0129 17:31:16.038584 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:31:27 crc kubenswrapper[4895]: I0129 17:31:27.048667 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:31:27 crc kubenswrapper[4895]: E0129 17:31:27.051243 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:31:41 crc kubenswrapper[4895]: I0129 17:31:41.038367 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:31:41 crc kubenswrapper[4895]: E0129 17:31:41.040164 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:31:55 crc 
kubenswrapper[4895]: I0129 17:31:55.037946 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:31:55 crc kubenswrapper[4895]: E0129 17:31:55.038998 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:32:07 crc kubenswrapper[4895]: I0129 17:32:07.051131 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:32:07 crc kubenswrapper[4895]: E0129 17:32:07.052587 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:32:22 crc kubenswrapper[4895]: I0129 17:32:22.038095 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:32:22 crc kubenswrapper[4895]: E0129 17:32:22.039337 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 
29 17:32:34 crc kubenswrapper[4895]: I0129 17:32:34.036887 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:32:34 crc kubenswrapper[4895]: E0129 17:32:34.037647 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:32:45 crc kubenswrapper[4895]: I0129 17:32:45.036945 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:32:45 crc kubenswrapper[4895]: E0129 17:32:45.037798 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:33:00 crc kubenswrapper[4895]: I0129 17:33:00.037961 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:33:00 crc kubenswrapper[4895]: I0129 17:33:00.703463 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"462fd333d2162e74a8304ae07bd7251cca0d4b222cee0d44c38a3ffb8071cb15"} Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.112221 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-6qtcz"] Jan 29 17:34:42 crc kubenswrapper[4895]: E0129 17:34:42.113447 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b798c5d-b2c4-457c-855b-926e2a4b7c6c" containerName="extract-content" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.113473 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b798c5d-b2c4-457c-855b-926e2a4b7c6c" containerName="extract-content" Jan 29 17:34:42 crc kubenswrapper[4895]: E0129 17:34:42.113518 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b798c5d-b2c4-457c-855b-926e2a4b7c6c" containerName="extract-utilities" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.113529 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b798c5d-b2c4-457c-855b-926e2a4b7c6c" containerName="extract-utilities" Jan 29 17:34:42 crc kubenswrapper[4895]: E0129 17:34:42.113553 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b798c5d-b2c4-457c-855b-926e2a4b7c6c" containerName="registry-server" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.113564 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b798c5d-b2c4-457c-855b-926e2a4b7c6c" containerName="registry-server" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.113850 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b798c5d-b2c4-457c-855b-926e2a4b7c6c" containerName="registry-server" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.116046 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.131555 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qtcz"] Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.284644 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ad42084-ced1-4cfb-8b3a-0ec350adb475-utilities\") pod \"certified-operators-6qtcz\" (UID: \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\") " pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.284699 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk456\" (UniqueName: \"kubernetes.io/projected/0ad42084-ced1-4cfb-8b3a-0ec350adb475-kube-api-access-xk456\") pod \"certified-operators-6qtcz\" (UID: \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\") " pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.284902 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ad42084-ced1-4cfb-8b3a-0ec350adb475-catalog-content\") pod \"certified-operators-6qtcz\" (UID: \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\") " pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.387015 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ad42084-ced1-4cfb-8b3a-0ec350adb475-catalog-content\") pod \"certified-operators-6qtcz\" (UID: \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\") " pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.387115 4895 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ad42084-ced1-4cfb-8b3a-0ec350adb475-utilities\") pod \"certified-operators-6qtcz\" (UID: \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\") " pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.387167 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk456\" (UniqueName: \"kubernetes.io/projected/0ad42084-ced1-4cfb-8b3a-0ec350adb475-kube-api-access-xk456\") pod \"certified-operators-6qtcz\" (UID: \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\") " pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.388004 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ad42084-ced1-4cfb-8b3a-0ec350adb475-catalog-content\") pod \"certified-operators-6qtcz\" (UID: \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\") " pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.388066 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ad42084-ced1-4cfb-8b3a-0ec350adb475-utilities\") pod \"certified-operators-6qtcz\" (UID: \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\") " pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.410044 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk456\" (UniqueName: \"kubernetes.io/projected/0ad42084-ced1-4cfb-8b3a-0ec350adb475-kube-api-access-xk456\") pod \"certified-operators-6qtcz\" (UID: \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\") " pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.433692 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:42 crc kubenswrapper[4895]: I0129 17:34:42.934663 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qtcz"] Jan 29 17:34:43 crc kubenswrapper[4895]: I0129 17:34:43.686155 4895 generic.go:334] "Generic (PLEG): container finished" podID="0ad42084-ced1-4cfb-8b3a-0ec350adb475" containerID="f6728497d149bc00d454da80f94a33600b7d9ea2af7d575b68819d6f7662ba32" exitCode=0 Jan 29 17:34:43 crc kubenswrapper[4895]: I0129 17:34:43.686221 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qtcz" event={"ID":"0ad42084-ced1-4cfb-8b3a-0ec350adb475","Type":"ContainerDied","Data":"f6728497d149bc00d454da80f94a33600b7d9ea2af7d575b68819d6f7662ba32"} Jan 29 17:34:43 crc kubenswrapper[4895]: I0129 17:34:43.686470 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qtcz" event={"ID":"0ad42084-ced1-4cfb-8b3a-0ec350adb475","Type":"ContainerStarted","Data":"831347251c44f31ded36e2b666054150132a3bf7e292750ba0b92ffd257ada09"} Jan 29 17:34:44 crc kubenswrapper[4895]: I0129 17:34:44.696117 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qtcz" event={"ID":"0ad42084-ced1-4cfb-8b3a-0ec350adb475","Type":"ContainerStarted","Data":"454a4579765802d5a34e6d9843f1dede338ab079bde7b1fe72128c771ae79faa"} Jan 29 17:34:45 crc kubenswrapper[4895]: I0129 17:34:45.705992 4895 generic.go:334] "Generic (PLEG): container finished" podID="0ad42084-ced1-4cfb-8b3a-0ec350adb475" containerID="454a4579765802d5a34e6d9843f1dede338ab079bde7b1fe72128c771ae79faa" exitCode=0 Jan 29 17:34:45 crc kubenswrapper[4895]: I0129 17:34:45.706044 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qtcz" 
event={"ID":"0ad42084-ced1-4cfb-8b3a-0ec350adb475","Type":"ContainerDied","Data":"454a4579765802d5a34e6d9843f1dede338ab079bde7b1fe72128c771ae79faa"} Jan 29 17:34:46 crc kubenswrapper[4895]: I0129 17:34:46.724384 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qtcz" event={"ID":"0ad42084-ced1-4cfb-8b3a-0ec350adb475","Type":"ContainerStarted","Data":"98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae"} Jan 29 17:34:46 crc kubenswrapper[4895]: I0129 17:34:46.743551 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6qtcz" podStartSLOduration=2.10642258 podStartE2EDuration="4.743529204s" podCreationTimestamp="2026-01-29 17:34:42 +0000 UTC" firstStartedPulling="2026-01-29 17:34:43.687887099 +0000 UTC m=+4967.490864373" lastFinishedPulling="2026-01-29 17:34:46.324993733 +0000 UTC m=+4970.127970997" observedRunningTime="2026-01-29 17:34:46.741082818 +0000 UTC m=+4970.544060082" watchObservedRunningTime="2026-01-29 17:34:46.743529204 +0000 UTC m=+4970.546506478" Jan 29 17:34:52 crc kubenswrapper[4895]: I0129 17:34:52.434015 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:52 crc kubenswrapper[4895]: I0129 17:34:52.434671 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:52 crc kubenswrapper[4895]: I0129 17:34:52.514669 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:52 crc kubenswrapper[4895]: I0129 17:34:52.828800 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:52 crc kubenswrapper[4895]: I0129 17:34:52.891191 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-6qtcz"] Jan 29 17:34:54 crc kubenswrapper[4895]: I0129 17:34:54.791941 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6qtcz" podUID="0ad42084-ced1-4cfb-8b3a-0ec350adb475" containerName="registry-server" containerID="cri-o://98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae" gracePeriod=2 Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.778774 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.803719 4895 generic.go:334] "Generic (PLEG): container finished" podID="0ad42084-ced1-4cfb-8b3a-0ec350adb475" containerID="98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae" exitCode=0 Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.803760 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qtcz" event={"ID":"0ad42084-ced1-4cfb-8b3a-0ec350adb475","Type":"ContainerDied","Data":"98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae"} Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.803782 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qtcz" event={"ID":"0ad42084-ced1-4cfb-8b3a-0ec350adb475","Type":"ContainerDied","Data":"831347251c44f31ded36e2b666054150132a3bf7e292750ba0b92ffd257ada09"} Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.803797 4895 scope.go:117] "RemoveContainer" containerID="98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.803852 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qtcz" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.829924 4895 scope.go:117] "RemoveContainer" containerID="454a4579765802d5a34e6d9843f1dede338ab079bde7b1fe72128c771ae79faa" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.849101 4895 scope.go:117] "RemoveContainer" containerID="f6728497d149bc00d454da80f94a33600b7d9ea2af7d575b68819d6f7662ba32" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.865239 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk456\" (UniqueName: \"kubernetes.io/projected/0ad42084-ced1-4cfb-8b3a-0ec350adb475-kube-api-access-xk456\") pod \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\" (UID: \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\") " Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.865335 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ad42084-ced1-4cfb-8b3a-0ec350adb475-catalog-content\") pod \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\" (UID: \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\") " Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.865396 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ad42084-ced1-4cfb-8b3a-0ec350adb475-utilities\") pod \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\" (UID: \"0ad42084-ced1-4cfb-8b3a-0ec350adb475\") " Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.866338 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ad42084-ced1-4cfb-8b3a-0ec350adb475-utilities" (OuterVolumeSpecName: "utilities") pod "0ad42084-ced1-4cfb-8b3a-0ec350adb475" (UID: "0ad42084-ced1-4cfb-8b3a-0ec350adb475"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.871744 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad42084-ced1-4cfb-8b3a-0ec350adb475-kube-api-access-xk456" (OuterVolumeSpecName: "kube-api-access-xk456") pod "0ad42084-ced1-4cfb-8b3a-0ec350adb475" (UID: "0ad42084-ced1-4cfb-8b3a-0ec350adb475"). InnerVolumeSpecName "kube-api-access-xk456". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.944921 4895 scope.go:117] "RemoveContainer" containerID="98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae" Jan 29 17:34:55 crc kubenswrapper[4895]: E0129 17:34:55.945382 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae\": container with ID starting with 98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae not found: ID does not exist" containerID="98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.945416 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae"} err="failed to get container status \"98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae\": rpc error: code = NotFound desc = could not find container \"98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae\": container with ID starting with 98165af679930afe0b09f71d6eb03db2ec85f091a01e8198c31c8137cc1338ae not found: ID does not exist" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.945436 4895 scope.go:117] "RemoveContainer" containerID="454a4579765802d5a34e6d9843f1dede338ab079bde7b1fe72128c771ae79faa" Jan 29 17:34:55 crc kubenswrapper[4895]: E0129 17:34:55.945700 
4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454a4579765802d5a34e6d9843f1dede338ab079bde7b1fe72128c771ae79faa\": container with ID starting with 454a4579765802d5a34e6d9843f1dede338ab079bde7b1fe72128c771ae79faa not found: ID does not exist" containerID="454a4579765802d5a34e6d9843f1dede338ab079bde7b1fe72128c771ae79faa" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.945721 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454a4579765802d5a34e6d9843f1dede338ab079bde7b1fe72128c771ae79faa"} err="failed to get container status \"454a4579765802d5a34e6d9843f1dede338ab079bde7b1fe72128c771ae79faa\": rpc error: code = NotFound desc = could not find container \"454a4579765802d5a34e6d9843f1dede338ab079bde7b1fe72128c771ae79faa\": container with ID starting with 454a4579765802d5a34e6d9843f1dede338ab079bde7b1fe72128c771ae79faa not found: ID does not exist" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.945737 4895 scope.go:117] "RemoveContainer" containerID="f6728497d149bc00d454da80f94a33600b7d9ea2af7d575b68819d6f7662ba32" Jan 29 17:34:55 crc kubenswrapper[4895]: E0129 17:34:55.946244 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6728497d149bc00d454da80f94a33600b7d9ea2af7d575b68819d6f7662ba32\": container with ID starting with f6728497d149bc00d454da80f94a33600b7d9ea2af7d575b68819d6f7662ba32 not found: ID does not exist" containerID="f6728497d149bc00d454da80f94a33600b7d9ea2af7d575b68819d6f7662ba32" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.946340 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6728497d149bc00d454da80f94a33600b7d9ea2af7d575b68819d6f7662ba32"} err="failed to get container status \"f6728497d149bc00d454da80f94a33600b7d9ea2af7d575b68819d6f7662ba32\": rpc error: code = 
NotFound desc = could not find container \"f6728497d149bc00d454da80f94a33600b7d9ea2af7d575b68819d6f7662ba32\": container with ID starting with f6728497d149bc00d454da80f94a33600b7d9ea2af7d575b68819d6f7662ba32 not found: ID does not exist" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.968224 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ad42084-ced1-4cfb-8b3a-0ec350adb475-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:34:55 crc kubenswrapper[4895]: I0129 17:34:55.968288 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk456\" (UniqueName: \"kubernetes.io/projected/0ad42084-ced1-4cfb-8b3a-0ec350adb475-kube-api-access-xk456\") on node \"crc\" DevicePath \"\"" Jan 29 17:34:56 crc kubenswrapper[4895]: I0129 17:34:56.233168 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ad42084-ced1-4cfb-8b3a-0ec350adb475-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ad42084-ced1-4cfb-8b3a-0ec350adb475" (UID: "0ad42084-ced1-4cfb-8b3a-0ec350adb475"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:34:56 crc kubenswrapper[4895]: I0129 17:34:56.274824 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ad42084-ced1-4cfb-8b3a-0ec350adb475-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:34:56 crc kubenswrapper[4895]: I0129 17:34:56.471526 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qtcz"] Jan 29 17:34:56 crc kubenswrapper[4895]: I0129 17:34:56.479653 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6qtcz"] Jan 29 17:34:57 crc kubenswrapper[4895]: I0129 17:34:57.049863 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad42084-ced1-4cfb-8b3a-0ec350adb475" path="/var/lib/kubelet/pods/0ad42084-ced1-4cfb-8b3a-0ec350adb475/volumes" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.755132 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jrxfb"] Jan 29 17:35:27 crc kubenswrapper[4895]: E0129 17:35:27.756033 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad42084-ced1-4cfb-8b3a-0ec350adb475" containerName="extract-content" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.756044 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad42084-ced1-4cfb-8b3a-0ec350adb475" containerName="extract-content" Jan 29 17:35:27 crc kubenswrapper[4895]: E0129 17:35:27.756066 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad42084-ced1-4cfb-8b3a-0ec350adb475" containerName="registry-server" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.756072 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad42084-ced1-4cfb-8b3a-0ec350adb475" containerName="registry-server" Jan 29 17:35:27 crc kubenswrapper[4895]: E0129 17:35:27.756083 4895 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0ad42084-ced1-4cfb-8b3a-0ec350adb475" containerName="extract-utilities" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.756093 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad42084-ced1-4cfb-8b3a-0ec350adb475" containerName="extract-utilities" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.756257 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad42084-ced1-4cfb-8b3a-0ec350adb475" containerName="registry-server" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.758186 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.775335 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrxfb"] Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.817337 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t24q\" (UniqueName: \"kubernetes.io/projected/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-kube-api-access-9t24q\") pod \"redhat-operators-jrxfb\" (UID: \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\") " pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.817501 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-catalog-content\") pod \"redhat-operators-jrxfb\" (UID: \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\") " pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.817536 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-utilities\") pod \"redhat-operators-jrxfb\" (UID: 
\"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\") " pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.823349 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.823409 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.918761 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t24q\" (UniqueName: \"kubernetes.io/projected/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-kube-api-access-9t24q\") pod \"redhat-operators-jrxfb\" (UID: \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\") " pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.918968 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-catalog-content\") pod \"redhat-operators-jrxfb\" (UID: \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\") " pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.919020 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-utilities\") pod \"redhat-operators-jrxfb\" (UID: \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\") " 
pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.919495 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-utilities\") pod \"redhat-operators-jrxfb\" (UID: \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\") " pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.919605 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-catalog-content\") pod \"redhat-operators-jrxfb\" (UID: \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\") " pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:27 crc kubenswrapper[4895]: I0129 17:35:27.948874 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t24q\" (UniqueName: \"kubernetes.io/projected/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-kube-api-access-9t24q\") pod \"redhat-operators-jrxfb\" (UID: \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\") " pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:28 crc kubenswrapper[4895]: I0129 17:35:28.082017 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:28 crc kubenswrapper[4895]: I0129 17:35:28.614402 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrxfb"] Jan 29 17:35:29 crc kubenswrapper[4895]: I0129 17:35:29.152086 4895 generic.go:334] "Generic (PLEG): container finished" podID="a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" containerID="ee5ca1ae21a08103a5b05c23a96fd309e8b8929969f9a563e395aa9c1de95759" exitCode=0 Jan 29 17:35:29 crc kubenswrapper[4895]: I0129 17:35:29.152359 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrxfb" event={"ID":"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe","Type":"ContainerDied","Data":"ee5ca1ae21a08103a5b05c23a96fd309e8b8929969f9a563e395aa9c1de95759"} Jan 29 17:35:29 crc kubenswrapper[4895]: I0129 17:35:29.152391 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrxfb" event={"ID":"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe","Type":"ContainerStarted","Data":"346e8b2ef20d61505635f1a890ca3c12e1aa54df891b6f5c81e29f3c6b8c0781"} Jan 29 17:35:31 crc kubenswrapper[4895]: I0129 17:35:31.175321 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrxfb" event={"ID":"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe","Type":"ContainerStarted","Data":"fe1e73ba0650730d8ec2f256d8b5ccf4158e37cab20c76a9e6186dd8b44a59c8"} Jan 29 17:35:37 crc kubenswrapper[4895]: I0129 17:35:37.229070 4895 generic.go:334] "Generic (PLEG): container finished" podID="a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" containerID="fe1e73ba0650730d8ec2f256d8b5ccf4158e37cab20c76a9e6186dd8b44a59c8" exitCode=0 Jan 29 17:35:37 crc kubenswrapper[4895]: I0129 17:35:37.229154 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrxfb" 
event={"ID":"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe","Type":"ContainerDied","Data":"fe1e73ba0650730d8ec2f256d8b5ccf4158e37cab20c76a9e6186dd8b44a59c8"} Jan 29 17:35:38 crc kubenswrapper[4895]: I0129 17:35:38.247763 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrxfb" event={"ID":"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe","Type":"ContainerStarted","Data":"472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf"} Jan 29 17:35:38 crc kubenswrapper[4895]: I0129 17:35:38.276347 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jrxfb" podStartSLOduration=2.6597763739999998 podStartE2EDuration="11.276328221s" podCreationTimestamp="2026-01-29 17:35:27 +0000 UTC" firstStartedPulling="2026-01-29 17:35:29.154942212 +0000 UTC m=+5012.957919476" lastFinishedPulling="2026-01-29 17:35:37.771494059 +0000 UTC m=+5021.574471323" observedRunningTime="2026-01-29 17:35:38.268976842 +0000 UTC m=+5022.071954116" watchObservedRunningTime="2026-01-29 17:35:38.276328221 +0000 UTC m=+5022.079305485" Jan 29 17:35:48 crc kubenswrapper[4895]: I0129 17:35:48.082604 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:48 crc kubenswrapper[4895]: I0129 17:35:48.083605 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:48 crc kubenswrapper[4895]: I0129 17:35:48.146989 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:48 crc kubenswrapper[4895]: I0129 17:35:48.381002 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:48 crc kubenswrapper[4895]: I0129 17:35:48.426990 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-jrxfb"] Jan 29 17:35:50 crc kubenswrapper[4895]: I0129 17:35:50.347011 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jrxfb" podUID="a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" containerName="registry-server" containerID="cri-o://472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf" gracePeriod=2 Jan 29 17:35:50 crc kubenswrapper[4895]: I0129 17:35:50.911220 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.014654 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-utilities\") pod \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\" (UID: \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\") " Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.014758 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t24q\" (UniqueName: \"kubernetes.io/projected/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-kube-api-access-9t24q\") pod \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\" (UID: \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\") " Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.014813 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-catalog-content\") pod \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\" (UID: \"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe\") " Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.015817 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-utilities" (OuterVolumeSpecName: "utilities") pod "a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" (UID: 
"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.023448 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-kube-api-access-9t24q" (OuterVolumeSpecName: "kube-api-access-9t24q") pod "a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" (UID: "a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe"). InnerVolumeSpecName "kube-api-access-9t24q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.117297 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.117344 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t24q\" (UniqueName: \"kubernetes.io/projected/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-kube-api-access-9t24q\") on node \"crc\" DevicePath \"\"" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.143763 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" (UID: "a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.219088 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.358475 4895 generic.go:334] "Generic (PLEG): container finished" podID="a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" containerID="472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf" exitCode=0 Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.358551 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrxfb" event={"ID":"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe","Type":"ContainerDied","Data":"472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf"} Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.358598 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrxfb" event={"ID":"a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe","Type":"ContainerDied","Data":"346e8b2ef20d61505635f1a890ca3c12e1aa54df891b6f5c81e29f3c6b8c0781"} Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.358623 4895 scope.go:117] "RemoveContainer" containerID="472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.358859 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrxfb" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.410387 4895 scope.go:117] "RemoveContainer" containerID="fe1e73ba0650730d8ec2f256d8b5ccf4158e37cab20c76a9e6186dd8b44a59c8" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.422972 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrxfb"] Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.434233 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jrxfb"] Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.438938 4895 scope.go:117] "RemoveContainer" containerID="ee5ca1ae21a08103a5b05c23a96fd309e8b8929969f9a563e395aa9c1de95759" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.477300 4895 scope.go:117] "RemoveContainer" containerID="472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf" Jan 29 17:35:51 crc kubenswrapper[4895]: E0129 17:35:51.478078 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf\": container with ID starting with 472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf not found: ID does not exist" containerID="472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.478119 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf"} err="failed to get container status \"472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf\": rpc error: code = NotFound desc = could not find container \"472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf\": container with ID starting with 472360b668bdf74984a5d171268df108d5b605844c5461a42888bf818008abdf not found: ID does 
not exist" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.478147 4895 scope.go:117] "RemoveContainer" containerID="fe1e73ba0650730d8ec2f256d8b5ccf4158e37cab20c76a9e6186dd8b44a59c8" Jan 29 17:35:51 crc kubenswrapper[4895]: E0129 17:35:51.478638 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe1e73ba0650730d8ec2f256d8b5ccf4158e37cab20c76a9e6186dd8b44a59c8\": container with ID starting with fe1e73ba0650730d8ec2f256d8b5ccf4158e37cab20c76a9e6186dd8b44a59c8 not found: ID does not exist" containerID="fe1e73ba0650730d8ec2f256d8b5ccf4158e37cab20c76a9e6186dd8b44a59c8" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.478665 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe1e73ba0650730d8ec2f256d8b5ccf4158e37cab20c76a9e6186dd8b44a59c8"} err="failed to get container status \"fe1e73ba0650730d8ec2f256d8b5ccf4158e37cab20c76a9e6186dd8b44a59c8\": rpc error: code = NotFound desc = could not find container \"fe1e73ba0650730d8ec2f256d8b5ccf4158e37cab20c76a9e6186dd8b44a59c8\": container with ID starting with fe1e73ba0650730d8ec2f256d8b5ccf4158e37cab20c76a9e6186dd8b44a59c8 not found: ID does not exist" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.478682 4895 scope.go:117] "RemoveContainer" containerID="ee5ca1ae21a08103a5b05c23a96fd309e8b8929969f9a563e395aa9c1de95759" Jan 29 17:35:51 crc kubenswrapper[4895]: E0129 17:35:51.479231 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee5ca1ae21a08103a5b05c23a96fd309e8b8929969f9a563e395aa9c1de95759\": container with ID starting with ee5ca1ae21a08103a5b05c23a96fd309e8b8929969f9a563e395aa9c1de95759 not found: ID does not exist" containerID="ee5ca1ae21a08103a5b05c23a96fd309e8b8929969f9a563e395aa9c1de95759" Jan 29 17:35:51 crc kubenswrapper[4895]: I0129 17:35:51.479257 4895 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5ca1ae21a08103a5b05c23a96fd309e8b8929969f9a563e395aa9c1de95759"} err="failed to get container status \"ee5ca1ae21a08103a5b05c23a96fd309e8b8929969f9a563e395aa9c1de95759\": rpc error: code = NotFound desc = could not find container \"ee5ca1ae21a08103a5b05c23a96fd309e8b8929969f9a563e395aa9c1de95759\": container with ID starting with ee5ca1ae21a08103a5b05c23a96fd309e8b8929969f9a563e395aa9c1de95759 not found: ID does not exist" Jan 29 17:35:53 crc kubenswrapper[4895]: I0129 17:35:53.047549 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" path="/var/lib/kubelet/pods/a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe/volumes" Jan 29 17:35:57 crc kubenswrapper[4895]: I0129 17:35:57.823156 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:35:57 crc kubenswrapper[4895]: I0129 17:35:57.823708 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:36:27 crc kubenswrapper[4895]: I0129 17:36:27.823124 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:36:27 crc kubenswrapper[4895]: I0129 17:36:27.823596 4895 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:36:27 crc kubenswrapper[4895]: I0129 17:36:27.823637 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 17:36:27 crc kubenswrapper[4895]: I0129 17:36:27.824440 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"462fd333d2162e74a8304ae07bd7251cca0d4b222cee0d44c38a3ffb8071cb15"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:36:27 crc kubenswrapper[4895]: I0129 17:36:27.824501 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://462fd333d2162e74a8304ae07bd7251cca0d4b222cee0d44c38a3ffb8071cb15" gracePeriod=600 Jan 29 17:36:28 crc kubenswrapper[4895]: I0129 17:36:28.697439 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="462fd333d2162e74a8304ae07bd7251cca0d4b222cee0d44c38a3ffb8071cb15" exitCode=0 Jan 29 17:36:28 crc kubenswrapper[4895]: I0129 17:36:28.697526 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"462fd333d2162e74a8304ae07bd7251cca0d4b222cee0d44c38a3ffb8071cb15"} Jan 29 17:36:28 crc kubenswrapper[4895]: I0129 17:36:28.698061 4895 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b"} Jan 29 17:36:28 crc kubenswrapper[4895]: I0129 17:36:28.698085 4895 scope.go:117] "RemoveContainer" containerID="1594f87725eaf6579d2b226d4901e2fec1bd1ca0c590a5a10f316a4487161aa3" Jan 29 17:38:57 crc kubenswrapper[4895]: I0129 17:38:57.153029 4895 generic.go:334] "Generic (PLEG): container finished" podID="fa7221ee-55be-4a14-8149-7299f46d1f0d" containerID="6e0dee3c09d878b5f048563b6b9dd16c332cacadb1ab473c8c85e594743043a6" exitCode=1 Jan 29 17:38:57 crc kubenswrapper[4895]: I0129 17:38:57.153080 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fa7221ee-55be-4a14-8149-7299f46d1f0d","Type":"ContainerDied","Data":"6e0dee3c09d878b5f048563b6b9dd16c332cacadb1ab473c8c85e594743043a6"} Jan 29 17:38:57 crc kubenswrapper[4895]: I0129 17:38:57.823259 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:38:57 crc kubenswrapper[4895]: I0129 17:38:57.823676 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.608403 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.803042 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-ca-certs\") pod \"fa7221ee-55be-4a14-8149-7299f46d1f0d\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.803848 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84m6t\" (UniqueName: \"kubernetes.io/projected/fa7221ee-55be-4a14-8149-7299f46d1f0d-kube-api-access-84m6t\") pod \"fa7221ee-55be-4a14-8149-7299f46d1f0d\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.804007 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa7221ee-55be-4a14-8149-7299f46d1f0d-test-operator-ephemeral-temporary\") pod \"fa7221ee-55be-4a14-8149-7299f46d1f0d\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.804562 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa7221ee-55be-4a14-8149-7299f46d1f0d-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "fa7221ee-55be-4a14-8149-7299f46d1f0d" (UID: "fa7221ee-55be-4a14-8149-7299f46d1f0d"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.804115 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa7221ee-55be-4a14-8149-7299f46d1f0d-openstack-config\") pod \"fa7221ee-55be-4a14-8149-7299f46d1f0d\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.804701 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-openstack-config-secret\") pod \"fa7221ee-55be-4a14-8149-7299f46d1f0d\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.804736 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa7221ee-55be-4a14-8149-7299f46d1f0d-test-operator-ephemeral-workdir\") pod \"fa7221ee-55be-4a14-8149-7299f46d1f0d\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.804776 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"fa7221ee-55be-4a14-8149-7299f46d1f0d\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.804825 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa7221ee-55be-4a14-8149-7299f46d1f0d-config-data\") pod \"fa7221ee-55be-4a14-8149-7299f46d1f0d\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.804916 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-ssh-key\") pod \"fa7221ee-55be-4a14-8149-7299f46d1f0d\" (UID: \"fa7221ee-55be-4a14-8149-7299f46d1f0d\") " Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.805684 4895 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fa7221ee-55be-4a14-8149-7299f46d1f0d-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.807136 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa7221ee-55be-4a14-8149-7299f46d1f0d-config-data" (OuterVolumeSpecName: "config-data") pod "fa7221ee-55be-4a14-8149-7299f46d1f0d" (UID: "fa7221ee-55be-4a14-8149-7299f46d1f0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.812164 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "fa7221ee-55be-4a14-8149-7299f46d1f0d" (UID: "fa7221ee-55be-4a14-8149-7299f46d1f0d"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.815730 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa7221ee-55be-4a14-8149-7299f46d1f0d-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "fa7221ee-55be-4a14-8149-7299f46d1f0d" (UID: "fa7221ee-55be-4a14-8149-7299f46d1f0d"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.826630 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa7221ee-55be-4a14-8149-7299f46d1f0d-kube-api-access-84m6t" (OuterVolumeSpecName: "kube-api-access-84m6t") pod "fa7221ee-55be-4a14-8149-7299f46d1f0d" (UID: "fa7221ee-55be-4a14-8149-7299f46d1f0d"). InnerVolumeSpecName "kube-api-access-84m6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.839926 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fa7221ee-55be-4a14-8149-7299f46d1f0d" (UID: "fa7221ee-55be-4a14-8149-7299f46d1f0d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.840482 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fa7221ee-55be-4a14-8149-7299f46d1f0d" (UID: "fa7221ee-55be-4a14-8149-7299f46d1f0d"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.864449 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fa7221ee-55be-4a14-8149-7299f46d1f0d" (UID: "fa7221ee-55be-4a14-8149-7299f46d1f0d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.880045 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa7221ee-55be-4a14-8149-7299f46d1f0d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fa7221ee-55be-4a14-8149-7299f46d1f0d" (UID: "fa7221ee-55be-4a14-8149-7299f46d1f0d"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.907410 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa7221ee-55be-4a14-8149-7299f46d1f0d-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.907447 4895 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.907458 4895 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fa7221ee-55be-4a14-8149-7299f46d1f0d-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.907487 4895 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.907499 4895 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa7221ee-55be-4a14-8149-7299f46d1f0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.907508 4895 reconciler_common.go:293] "Volume detached for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.907517 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84m6t\" (UniqueName: \"kubernetes.io/projected/fa7221ee-55be-4a14-8149-7299f46d1f0d-kube-api-access-84m6t\") on node \"crc\" DevicePath \"\"" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.907526 4895 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fa7221ee-55be-4a14-8149-7299f46d1f0d-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:38:58 crc kubenswrapper[4895]: I0129 17:38:58.927966 4895 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 29 17:38:59 crc kubenswrapper[4895]: I0129 17:38:59.009547 4895 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:38:59 crc kubenswrapper[4895]: I0129 17:38:59.175067 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fa7221ee-55be-4a14-8149-7299f46d1f0d","Type":"ContainerDied","Data":"4be67b90928c44a63c52a3335a770229cd4ff7533b0c39aa085725582271fc2f"} Jan 29 17:38:59 crc kubenswrapper[4895]: I0129 17:38:59.175592 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4be67b90928c44a63c52a3335a770229cd4ff7533b0c39aa085725582271fc2f" Jan 29 17:38:59 crc kubenswrapper[4895]: I0129 17:38:59.175369 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.458358 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 17:39:04 crc kubenswrapper[4895]: E0129 17:39:04.460488 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" containerName="extract-content" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.460520 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" containerName="extract-content" Jan 29 17:39:04 crc kubenswrapper[4895]: E0129 17:39:04.460555 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa7221ee-55be-4a14-8149-7299f46d1f0d" containerName="tempest-tests-tempest-tests-runner" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.460570 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa7221ee-55be-4a14-8149-7299f46d1f0d" containerName="tempest-tests-tempest-tests-runner" Jan 29 17:39:04 crc kubenswrapper[4895]: E0129 17:39:04.460605 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" containerName="extract-utilities" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.460618 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" containerName="extract-utilities" Jan 29 17:39:04 crc kubenswrapper[4895]: E0129 17:39:04.460683 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" containerName="registry-server" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.460699 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" containerName="registry-server" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.461140 4895 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a0f848c5-bc01-44bd-b2f7-a5fe6b45ecfe" containerName="registry-server" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.461205 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa7221ee-55be-4a14-8149-7299f46d1f0d" containerName="tempest-tests-tempest-tests-runner" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.462667 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.468666 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-9tlm5" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.472612 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.639409 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bd9cf013-3c56-469b-8321-8937d5919276\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.639550 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzzsj\" (UniqueName: \"kubernetes.io/projected/bd9cf013-3c56-469b-8321-8937d5919276-kube-api-access-kzzsj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bd9cf013-3c56-469b-8321-8937d5919276\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.743258 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bd9cf013-3c56-469b-8321-8937d5919276\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.743683 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzzsj\" (UniqueName: \"kubernetes.io/projected/bd9cf013-3c56-469b-8321-8937d5919276-kube-api-access-kzzsj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bd9cf013-3c56-469b-8321-8937d5919276\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.744135 4895 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bd9cf013-3c56-469b-8321-8937d5919276\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.767653 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzzsj\" (UniqueName: \"kubernetes.io/projected/bd9cf013-3c56-469b-8321-8937d5919276-kube-api-access-kzzsj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bd9cf013-3c56-469b-8321-8937d5919276\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 17:39:04 crc kubenswrapper[4895]: I0129 17:39:04.776620 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bd9cf013-3c56-469b-8321-8937d5919276\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 17:39:04 
crc kubenswrapper[4895]: I0129 17:39:04.783512 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 17:39:05 crc kubenswrapper[4895]: I0129 17:39:05.265419 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:39:05 crc kubenswrapper[4895]: I0129 17:39:05.265503 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 17:39:06 crc kubenswrapper[4895]: I0129 17:39:06.266363 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"bd9cf013-3c56-469b-8321-8937d5919276","Type":"ContainerStarted","Data":"bdc290dbe9f718cca53b6bf6a2df0591a23ca74ffea24919eb6d0d1f07ac51a6"} Jan 29 17:39:07 crc kubenswrapper[4895]: I0129 17:39:07.275721 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"bd9cf013-3c56-469b-8321-8937d5919276","Type":"ContainerStarted","Data":"9abbc37c5e5086a70ebb2be3c4540a5f38164cf93b7fc85512d1e58b72eead73"} Jan 29 17:39:07 crc kubenswrapper[4895]: I0129 17:39:07.294595 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.281163701 podStartE2EDuration="3.294574107s" podCreationTimestamp="2026-01-29 17:39:04 +0000 UTC" firstStartedPulling="2026-01-29 17:39:05.265080893 +0000 UTC m=+5229.068058167" lastFinishedPulling="2026-01-29 17:39:06.278491269 +0000 UTC m=+5230.081468573" observedRunningTime="2026-01-29 17:39:07.285738507 +0000 UTC m=+5231.088715771" watchObservedRunningTime="2026-01-29 17:39:07.294574107 +0000 UTC m=+5231.097551371" Jan 29 17:39:27 crc kubenswrapper[4895]: I0129 17:39:27.823119 4895 patch_prober.go:28] interesting 
pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:39:27 crc kubenswrapper[4895]: I0129 17:39:27.823561 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:39:56 crc kubenswrapper[4895]: I0129 17:39:56.791097 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kzbmj/must-gather-j9jb7"] Jan 29 17:39:56 crc kubenswrapper[4895]: I0129 17:39:56.793003 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kzbmj/must-gather-j9jb7" Jan 29 17:39:56 crc kubenswrapper[4895]: I0129 17:39:56.801189 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kzbmj"/"default-dockercfg-7ldzz" Jan 29 17:39:56 crc kubenswrapper[4895]: I0129 17:39:56.801253 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kzbmj"/"openshift-service-ca.crt" Jan 29 17:39:56 crc kubenswrapper[4895]: I0129 17:39:56.801679 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kzbmj"/"kube-root-ca.crt" Jan 29 17:39:56 crc kubenswrapper[4895]: I0129 17:39:56.806820 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kzbmj/must-gather-j9jb7"] Jan 29 17:39:56 crc kubenswrapper[4895]: I0129 17:39:56.889858 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64nf4\" (UniqueName: 
\"kubernetes.io/projected/eaa16f29-f14b-4ce4-bf46-2660a43de5fd-kube-api-access-64nf4\") pod \"must-gather-j9jb7\" (UID: \"eaa16f29-f14b-4ce4-bf46-2660a43de5fd\") " pod="openshift-must-gather-kzbmj/must-gather-j9jb7" Jan 29 17:39:56 crc kubenswrapper[4895]: I0129 17:39:56.889935 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eaa16f29-f14b-4ce4-bf46-2660a43de5fd-must-gather-output\") pod \"must-gather-j9jb7\" (UID: \"eaa16f29-f14b-4ce4-bf46-2660a43de5fd\") " pod="openshift-must-gather-kzbmj/must-gather-j9jb7" Jan 29 17:39:56 crc kubenswrapper[4895]: I0129 17:39:56.991322 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64nf4\" (UniqueName: \"kubernetes.io/projected/eaa16f29-f14b-4ce4-bf46-2660a43de5fd-kube-api-access-64nf4\") pod \"must-gather-j9jb7\" (UID: \"eaa16f29-f14b-4ce4-bf46-2660a43de5fd\") " pod="openshift-must-gather-kzbmj/must-gather-j9jb7" Jan 29 17:39:56 crc kubenswrapper[4895]: I0129 17:39:56.991614 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eaa16f29-f14b-4ce4-bf46-2660a43de5fd-must-gather-output\") pod \"must-gather-j9jb7\" (UID: \"eaa16f29-f14b-4ce4-bf46-2660a43de5fd\") " pod="openshift-must-gather-kzbmj/must-gather-j9jb7" Jan 29 17:39:56 crc kubenswrapper[4895]: I0129 17:39:56.992087 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eaa16f29-f14b-4ce4-bf46-2660a43de5fd-must-gather-output\") pod \"must-gather-j9jb7\" (UID: \"eaa16f29-f14b-4ce4-bf46-2660a43de5fd\") " pod="openshift-must-gather-kzbmj/must-gather-j9jb7" Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.007183 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kzbmj"/"kube-root-ca.crt" Jan 29 
17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.018392 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kzbmj"/"openshift-service-ca.crt" Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.037736 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64nf4\" (UniqueName: \"kubernetes.io/projected/eaa16f29-f14b-4ce4-bf46-2660a43de5fd-kube-api-access-64nf4\") pod \"must-gather-j9jb7\" (UID: \"eaa16f29-f14b-4ce4-bf46-2660a43de5fd\") " pod="openshift-must-gather-kzbmj/must-gather-j9jb7" Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.109960 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kzbmj"/"default-dockercfg-7ldzz" Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.118649 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kzbmj/must-gather-j9jb7" Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.569660 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kzbmj/must-gather-j9jb7"] Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.823486 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.823772 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.823813 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.824466 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.824517 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" gracePeriod=600 Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.948692 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kzbmj/must-gather-j9jb7" event={"ID":"eaa16f29-f14b-4ce4-bf46-2660a43de5fd","Type":"ContainerStarted","Data":"bf9c36d2c36fae53340247f480e6a3a891dd5cebe2169fa34069ee1fe8fa6950"} Jan 29 17:39:57 crc kubenswrapper[4895]: E0129 17:39:57.951796 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.952074 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" exitCode=0 Jan 29 17:39:57 
crc kubenswrapper[4895]: I0129 17:39:57.952106 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b"} Jan 29 17:39:57 crc kubenswrapper[4895]: I0129 17:39:57.952132 4895 scope.go:117] "RemoveContainer" containerID="462fd333d2162e74a8304ae07bd7251cca0d4b222cee0d44c38a3ffb8071cb15" Jan 29 17:39:58 crc kubenswrapper[4895]: I0129 17:39:58.963388 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:39:58 crc kubenswrapper[4895]: E0129 17:39:58.963859 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:40:04 crc kubenswrapper[4895]: I0129 17:40:04.003945 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kzbmj/must-gather-j9jb7" event={"ID":"eaa16f29-f14b-4ce4-bf46-2660a43de5fd","Type":"ContainerStarted","Data":"3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec"} Jan 29 17:40:04 crc kubenswrapper[4895]: I0129 17:40:04.004543 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kzbmj/must-gather-j9jb7" event={"ID":"eaa16f29-f14b-4ce4-bf46-2660a43de5fd","Type":"ContainerStarted","Data":"c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191"} Jan 29 17:40:04 crc kubenswrapper[4895]: I0129 17:40:04.031898 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-must-gather-kzbmj/must-gather-j9jb7" podStartSLOduration=2.225607744 podStartE2EDuration="8.03185973s" podCreationTimestamp="2026-01-29 17:39:56 +0000 UTC" firstStartedPulling="2026-01-29 17:39:57.570861096 +0000 UTC m=+5281.373838370" lastFinishedPulling="2026-01-29 17:40:03.377113082 +0000 UTC m=+5287.180090356" observedRunningTime="2026-01-29 17:40:04.022442125 +0000 UTC m=+5287.825419429" watchObservedRunningTime="2026-01-29 17:40:04.03185973 +0000 UTC m=+5287.834837004" Jan 29 17:40:08 crc kubenswrapper[4895]: I0129 17:40:08.036464 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kzbmj/crc-debug-dkmf8"] Jan 29 17:40:08 crc kubenswrapper[4895]: I0129 17:40:08.039300 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" Jan 29 17:40:08 crc kubenswrapper[4895]: I0129 17:40:08.214305 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zf46\" (UniqueName: \"kubernetes.io/projected/2e491b93-a382-4443-8a4d-bb605043a422-kube-api-access-4zf46\") pod \"crc-debug-dkmf8\" (UID: \"2e491b93-a382-4443-8a4d-bb605043a422\") " pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" Jan 29 17:40:08 crc kubenswrapper[4895]: I0129 17:40:08.214519 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e491b93-a382-4443-8a4d-bb605043a422-host\") pod \"crc-debug-dkmf8\" (UID: \"2e491b93-a382-4443-8a4d-bb605043a422\") " pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" Jan 29 17:40:08 crc kubenswrapper[4895]: I0129 17:40:08.316340 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e491b93-a382-4443-8a4d-bb605043a422-host\") pod \"crc-debug-dkmf8\" (UID: \"2e491b93-a382-4443-8a4d-bb605043a422\") " 
pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" Jan 29 17:40:08 crc kubenswrapper[4895]: I0129 17:40:08.316433 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zf46\" (UniqueName: \"kubernetes.io/projected/2e491b93-a382-4443-8a4d-bb605043a422-kube-api-access-4zf46\") pod \"crc-debug-dkmf8\" (UID: \"2e491b93-a382-4443-8a4d-bb605043a422\") " pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" Jan 29 17:40:08 crc kubenswrapper[4895]: I0129 17:40:08.316772 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e491b93-a382-4443-8a4d-bb605043a422-host\") pod \"crc-debug-dkmf8\" (UID: \"2e491b93-a382-4443-8a4d-bb605043a422\") " pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" Jan 29 17:40:08 crc kubenswrapper[4895]: I0129 17:40:08.341314 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zf46\" (UniqueName: \"kubernetes.io/projected/2e491b93-a382-4443-8a4d-bb605043a422-kube-api-access-4zf46\") pod \"crc-debug-dkmf8\" (UID: \"2e491b93-a382-4443-8a4d-bb605043a422\") " pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" Jan 29 17:40:08 crc kubenswrapper[4895]: I0129 17:40:08.358806 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" Jan 29 17:40:09 crc kubenswrapper[4895]: I0129 17:40:09.050066 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" event={"ID":"2e491b93-a382-4443-8a4d-bb605043a422","Type":"ContainerStarted","Data":"fbfe69d8fb052f56009ad201d9357470ed9ff8068b139f373fce9f8666c7da73"} Jan 29 17:40:10 crc kubenswrapper[4895]: I0129 17:40:10.036641 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:40:10 crc kubenswrapper[4895]: E0129 17:40:10.036902 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:40:18 crc kubenswrapper[4895]: I0129 17:40:18.124606 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" event={"ID":"2e491b93-a382-4443-8a4d-bb605043a422","Type":"ContainerStarted","Data":"f524c4e808dab966a2eb5f82a14dfd6064d763d53668c3660ef8f379e8bbde5e"} Jan 29 17:40:18 crc kubenswrapper[4895]: I0129 17:40:18.137354 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" podStartSLOduration=1.381963086 podStartE2EDuration="10.137334849s" podCreationTimestamp="2026-01-29 17:40:08 +0000 UTC" firstStartedPulling="2026-01-29 17:40:08.392554451 +0000 UTC m=+5292.195531715" lastFinishedPulling="2026-01-29 17:40:17.147926214 +0000 UTC m=+5300.950903478" observedRunningTime="2026-01-29 17:40:18.137257027 +0000 UTC m=+5301.940234281" watchObservedRunningTime="2026-01-29 17:40:18.137334849 +0000 UTC 
m=+5301.940312113" Jan 29 17:40:19 crc kubenswrapper[4895]: I0129 17:40:19.134017 4895 generic.go:334] "Generic (PLEG): container finished" podID="2e491b93-a382-4443-8a4d-bb605043a422" containerID="f524c4e808dab966a2eb5f82a14dfd6064d763d53668c3660ef8f379e8bbde5e" exitCode=125 Jan 29 17:40:19 crc kubenswrapper[4895]: I0129 17:40:19.134226 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" event={"ID":"2e491b93-a382-4443-8a4d-bb605043a422","Type":"ContainerDied","Data":"f524c4e808dab966a2eb5f82a14dfd6064d763d53668c3660ef8f379e8bbde5e"} Jan 29 17:40:20 crc kubenswrapper[4895]: I0129 17:40:20.248288 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" Jan 29 17:40:20 crc kubenswrapper[4895]: I0129 17:40:20.284820 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kzbmj/crc-debug-dkmf8"] Jan 29 17:40:20 crc kubenswrapper[4895]: I0129 17:40:20.292323 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kzbmj/crc-debug-dkmf8"] Jan 29 17:40:20 crc kubenswrapper[4895]: I0129 17:40:20.348103 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zf46\" (UniqueName: \"kubernetes.io/projected/2e491b93-a382-4443-8a4d-bb605043a422-kube-api-access-4zf46\") pod \"2e491b93-a382-4443-8a4d-bb605043a422\" (UID: \"2e491b93-a382-4443-8a4d-bb605043a422\") " Jan 29 17:40:20 crc kubenswrapper[4895]: I0129 17:40:20.348321 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e491b93-a382-4443-8a4d-bb605043a422-host\") pod \"2e491b93-a382-4443-8a4d-bb605043a422\" (UID: \"2e491b93-a382-4443-8a4d-bb605043a422\") " Jan 29 17:40:20 crc kubenswrapper[4895]: I0129 17:40:20.348488 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/2e491b93-a382-4443-8a4d-bb605043a422-host" (OuterVolumeSpecName: "host") pod "2e491b93-a382-4443-8a4d-bb605043a422" (UID: "2e491b93-a382-4443-8a4d-bb605043a422"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:40:20 crc kubenswrapper[4895]: I0129 17:40:20.348838 4895 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e491b93-a382-4443-8a4d-bb605043a422-host\") on node \"crc\" DevicePath \"\"" Jan 29 17:40:20 crc kubenswrapper[4895]: I0129 17:40:20.354770 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e491b93-a382-4443-8a4d-bb605043a422-kube-api-access-4zf46" (OuterVolumeSpecName: "kube-api-access-4zf46") pod "2e491b93-a382-4443-8a4d-bb605043a422" (UID: "2e491b93-a382-4443-8a4d-bb605043a422"). InnerVolumeSpecName "kube-api-access-4zf46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:40:20 crc kubenswrapper[4895]: I0129 17:40:20.451124 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zf46\" (UniqueName: \"kubernetes.io/projected/2e491b93-a382-4443-8a4d-bb605043a422-kube-api-access-4zf46\") on node \"crc\" DevicePath \"\"" Jan 29 17:40:21 crc kubenswrapper[4895]: I0129 17:40:21.062880 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e491b93-a382-4443-8a4d-bb605043a422" path="/var/lib/kubelet/pods/2e491b93-a382-4443-8a4d-bb605043a422/volumes" Jan 29 17:40:21 crc kubenswrapper[4895]: I0129 17:40:21.151581 4895 scope.go:117] "RemoveContainer" containerID="f524c4e808dab966a2eb5f82a14dfd6064d763d53668c3660ef8f379e8bbde5e" Jan 29 17:40:21 crc kubenswrapper[4895]: I0129 17:40:21.151699 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kzbmj/crc-debug-dkmf8" Jan 29 17:40:25 crc kubenswrapper[4895]: I0129 17:40:25.042357 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:40:25 crc kubenswrapper[4895]: E0129 17:40:25.048794 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:40:25 crc kubenswrapper[4895]: I0129 17:40:25.957218 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgndp"] Jan 29 17:40:25 crc kubenswrapper[4895]: E0129 17:40:25.960666 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e491b93-a382-4443-8a4d-bb605043a422" containerName="container-00" Jan 29 17:40:25 crc kubenswrapper[4895]: I0129 17:40:25.960711 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e491b93-a382-4443-8a4d-bb605043a422" containerName="container-00" Jan 29 17:40:25 crc kubenswrapper[4895]: I0129 17:40:25.960986 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e491b93-a382-4443-8a4d-bb605043a422" containerName="container-00" Jan 29 17:40:25 crc kubenswrapper[4895]: I0129 17:40:25.962620 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:40:25 crc kubenswrapper[4895]: I0129 17:40:25.980014 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgndp"] Jan 29 17:40:26 crc kubenswrapper[4895]: I0129 17:40:26.063701 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2862q\" (UniqueName: \"kubernetes.io/projected/ff147090-376c-429b-a465-a41d2772de00-kube-api-access-2862q\") pod \"redhat-marketplace-rgndp\" (UID: \"ff147090-376c-429b-a465-a41d2772de00\") " pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:40:26 crc kubenswrapper[4895]: I0129 17:40:26.063822 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff147090-376c-429b-a465-a41d2772de00-utilities\") pod \"redhat-marketplace-rgndp\" (UID: \"ff147090-376c-429b-a465-a41d2772de00\") " pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:40:26 crc kubenswrapper[4895]: I0129 17:40:26.063850 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff147090-376c-429b-a465-a41d2772de00-catalog-content\") pod \"redhat-marketplace-rgndp\" (UID: \"ff147090-376c-429b-a465-a41d2772de00\") " pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:40:26 crc kubenswrapper[4895]: I0129 17:40:26.165217 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff147090-376c-429b-a465-a41d2772de00-utilities\") pod \"redhat-marketplace-rgndp\" (UID: \"ff147090-376c-429b-a465-a41d2772de00\") " pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:40:26 crc kubenswrapper[4895]: I0129 17:40:26.165542 4895 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff147090-376c-429b-a465-a41d2772de00-catalog-content\") pod \"redhat-marketplace-rgndp\" (UID: \"ff147090-376c-429b-a465-a41d2772de00\") " pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:40:26 crc kubenswrapper[4895]: I0129 17:40:26.165652 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2862q\" (UniqueName: \"kubernetes.io/projected/ff147090-376c-429b-a465-a41d2772de00-kube-api-access-2862q\") pod \"redhat-marketplace-rgndp\" (UID: \"ff147090-376c-429b-a465-a41d2772de00\") " pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:40:26 crc kubenswrapper[4895]: I0129 17:40:26.166126 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff147090-376c-429b-a465-a41d2772de00-catalog-content\") pod \"redhat-marketplace-rgndp\" (UID: \"ff147090-376c-429b-a465-a41d2772de00\") " pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:40:26 crc kubenswrapper[4895]: I0129 17:40:26.166298 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff147090-376c-429b-a465-a41d2772de00-utilities\") pod \"redhat-marketplace-rgndp\" (UID: \"ff147090-376c-429b-a465-a41d2772de00\") " pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:40:26 crc kubenswrapper[4895]: I0129 17:40:26.190460 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2862q\" (UniqueName: \"kubernetes.io/projected/ff147090-376c-429b-a465-a41d2772de00-kube-api-access-2862q\") pod \"redhat-marketplace-rgndp\" (UID: \"ff147090-376c-429b-a465-a41d2772de00\") " pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:40:26 crc kubenswrapper[4895]: I0129 17:40:26.303488 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:40:26 crc kubenswrapper[4895]: I0129 17:40:26.778114 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgndp"] Jan 29 17:40:27 crc kubenswrapper[4895]: I0129 17:40:27.208899 4895 generic.go:334] "Generic (PLEG): container finished" podID="ff147090-376c-429b-a465-a41d2772de00" containerID="6e5b392bf23bb0d8aee0bd0ce42d3e2bb758c3f5af81dd767233575144988ddf" exitCode=0 Jan 29 17:40:27 crc kubenswrapper[4895]: I0129 17:40:27.208945 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgndp" event={"ID":"ff147090-376c-429b-a465-a41d2772de00","Type":"ContainerDied","Data":"6e5b392bf23bb0d8aee0bd0ce42d3e2bb758c3f5af81dd767233575144988ddf"} Jan 29 17:40:27 crc kubenswrapper[4895]: I0129 17:40:27.208992 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgndp" event={"ID":"ff147090-376c-429b-a465-a41d2772de00","Type":"ContainerStarted","Data":"b5ee68f8f5200f80b5abae19bb1acf72c27392f474ac5d4bd7d996017b643acb"} Jan 29 17:40:27 crc kubenswrapper[4895]: E0129 17:40:27.330553 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 17:40:27 crc kubenswrapper[4895]: E0129 17:40:27.330697 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2862q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rgndp_openshift-marketplace(ff147090-376c-429b-a465-a41d2772de00): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:40:27 crc kubenswrapper[4895]: E0129 17:40:27.331851 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:40:28 crc kubenswrapper[4895]: E0129 17:40:28.220762 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:40:39 crc kubenswrapper[4895]: I0129 17:40:39.037122 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:40:39 crc kubenswrapper[4895]: E0129 17:40:39.037952 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:40:41 crc kubenswrapper[4895]: E0129 17:40:41.168637 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 17:40:41 crc kubenswrapper[4895]: E0129 17:40:41.169268 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2862q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rgndp_openshift-marketplace(ff147090-376c-429b-a465-a41d2772de00): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:40:41 crc kubenswrapper[4895]: E0129 17:40:41.170530 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:40:51 crc kubenswrapper[4895]: I0129 17:40:51.036500 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:40:51 crc kubenswrapper[4895]: E0129 17:40:51.037273 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:40:55 crc kubenswrapper[4895]: E0129 17:40:55.039464 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.093277 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vzzds"] Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.137561 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vzzds"] Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.137715 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.245673 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4f588f-f4f6-4000-9919-8461a6af64a3-utilities\") pod \"community-operators-vzzds\" (UID: \"4c4f588f-f4f6-4000-9919-8461a6af64a3\") " pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.245887 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4f588f-f4f6-4000-9919-8461a6af64a3-catalog-content\") pod \"community-operators-vzzds\" (UID: \"4c4f588f-f4f6-4000-9919-8461a6af64a3\") " pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.245938 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5mf\" (UniqueName: \"kubernetes.io/projected/4c4f588f-f4f6-4000-9919-8461a6af64a3-kube-api-access-qm5mf\") pod \"community-operators-vzzds\" (UID: \"4c4f588f-f4f6-4000-9919-8461a6af64a3\") " pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.347993 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4f588f-f4f6-4000-9919-8461a6af64a3-catalog-content\") pod \"community-operators-vzzds\" (UID: \"4c4f588f-f4f6-4000-9919-8461a6af64a3\") " pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.348484 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5mf\" (UniqueName: \"kubernetes.io/projected/4c4f588f-f4f6-4000-9919-8461a6af64a3-kube-api-access-qm5mf\") pod 
\"community-operators-vzzds\" (UID: \"4c4f588f-f4f6-4000-9919-8461a6af64a3\") " pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.348427 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4f588f-f4f6-4000-9919-8461a6af64a3-catalog-content\") pod \"community-operators-vzzds\" (UID: \"4c4f588f-f4f6-4000-9919-8461a6af64a3\") " pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.348925 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4f588f-f4f6-4000-9919-8461a6af64a3-utilities\") pod \"community-operators-vzzds\" (UID: \"4c4f588f-f4f6-4000-9919-8461a6af64a3\") " pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.349312 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4f588f-f4f6-4000-9919-8461a6af64a3-utilities\") pod \"community-operators-vzzds\" (UID: \"4c4f588f-f4f6-4000-9919-8461a6af64a3\") " pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.368090 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5mf\" (UniqueName: \"kubernetes.io/projected/4c4f588f-f4f6-4000-9919-8461a6af64a3-kube-api-access-qm5mf\") pod \"community-operators-vzzds\" (UID: \"4c4f588f-f4f6-4000-9919-8461a6af64a3\") " pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:41:05 crc kubenswrapper[4895]: I0129 17:41:05.479965 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:41:06 crc kubenswrapper[4895]: I0129 17:41:06.001116 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vzzds"] Jan 29 17:41:06 crc kubenswrapper[4895]: I0129 17:41:06.038740 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:41:06 crc kubenswrapper[4895]: E0129 17:41:06.039096 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:41:06 crc kubenswrapper[4895]: I0129 17:41:06.598304 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzzds" event={"ID":"4c4f588f-f4f6-4000-9919-8461a6af64a3","Type":"ContainerDied","Data":"86beb07b61a85f6fbd4d24a6e019b0ac73a59e7b477e6aeb7bfe10fc57ba0ae7"} Jan 29 17:41:06 crc kubenswrapper[4895]: I0129 17:41:06.599470 4895 generic.go:334] "Generic (PLEG): container finished" podID="4c4f588f-f4f6-4000-9919-8461a6af64a3" containerID="86beb07b61a85f6fbd4d24a6e019b0ac73a59e7b477e6aeb7bfe10fc57ba0ae7" exitCode=0 Jan 29 17:41:06 crc kubenswrapper[4895]: I0129 17:41:06.599564 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzzds" event={"ID":"4c4f588f-f4f6-4000-9919-8461a6af64a3","Type":"ContainerStarted","Data":"c1089ae412b9d709af5872ad316adb99df27d095bf8738b13301a8e804435905"} Jan 29 17:41:06 crc kubenswrapper[4895]: E0129 17:41:06.732434 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:41:06 crc kubenswrapper[4895]: E0129 17:41:06.732577 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm5mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vzzds_openshift-marketplace(4c4f588f-f4f6-4000-9919-8461a6af64a3): ErrImagePull: 
initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:41:06 crc kubenswrapper[4895]: E0129 17:41:06.733778 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:41:07 crc kubenswrapper[4895]: E0129 17:41:07.612996 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:41:09 crc kubenswrapper[4895]: E0129 17:41:09.184410 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 17:41:09 crc kubenswrapper[4895]: E0129 17:41:09.185124 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2862q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rgndp_openshift-marketplace(ff147090-376c-429b-a465-a41d2772de00): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:41:09 crc kubenswrapper[4895]: E0129 17:41:09.186543 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:41:21 crc kubenswrapper[4895]: I0129 17:41:21.036570 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:41:21 crc kubenswrapper[4895]: E0129 17:41:21.037439 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:41:22 crc kubenswrapper[4895]: E0129 17:41:22.039216 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:41:22 crc kubenswrapper[4895]: E0129 17:41:22.156041 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:41:22 crc kubenswrapper[4895]: E0129 17:41:22.156187 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm5mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vzzds_openshift-marketplace(4c4f588f-f4f6-4000-9919-8461a6af64a3): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:41:22 crc kubenswrapper[4895]: E0129 17:41:22.157660 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:41:32 crc kubenswrapper[4895]: I0129 17:41:32.036941 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:41:32 crc kubenswrapper[4895]: E0129 17:41:32.037833 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:41:34 crc kubenswrapper[4895]: E0129 17:41:34.042377 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:41:35 crc kubenswrapper[4895]: E0129 17:41:35.038625 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:41:42 crc kubenswrapper[4895]: I0129 17:41:42.868784 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-68f56c8b56-xwgv2_02fc8cd9-5a26-4ca0-9a6b-f70458ed2977/barbican-api/0.log" Jan 29 17:41:43 crc kubenswrapper[4895]: I0129 17:41:43.088505 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-556f55978-6k8pl_fc68545b-8e7a-4b48-86f1-86b5e188672d/barbican-keystone-listener/0.log" Jan 29 17:41:43 crc kubenswrapper[4895]: I0129 17:41:43.145768 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-68f56c8b56-xwgv2_02fc8cd9-5a26-4ca0-9a6b-f70458ed2977/barbican-api-log/0.log" Jan 29 17:41:43 crc kubenswrapper[4895]: I0129 17:41:43.285320 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-556f55978-6k8pl_fc68545b-8e7a-4b48-86f1-86b5e188672d/barbican-keystone-listener-log/0.log" Jan 29 17:41:43 crc kubenswrapper[4895]: I0129 17:41:43.291393 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b5b766675-prdvb_4684685c-78bd-4773-ba6d-7e663bb1ea19/barbican-worker/0.log" Jan 29 17:41:43 crc kubenswrapper[4895]: I0129 17:41:43.379816 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b5b766675-prdvb_4684685c-78bd-4773-ba6d-7e663bb1ea19/barbican-worker-log/0.log" Jan 29 17:41:43 crc kubenswrapper[4895]: I0129 17:41:43.513820 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-lrlbt_3be5cec1-ef17-4899-a276-f6f7b3cdb9f5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:43 crc kubenswrapper[4895]: I0129 17:41:43.581392 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ad356ea5-8184-46f6-b58d-399b0a742239/ceilometer-central-agent/0.log" Jan 29 17:41:43 crc kubenswrapper[4895]: I0129 17:41:43.702170 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ad356ea5-8184-46f6-b58d-399b0a742239/ceilometer-notification-agent/0.log" Jan 29 17:41:43 crc kubenswrapper[4895]: I0129 17:41:43.715642 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_ad356ea5-8184-46f6-b58d-399b0a742239/proxy-httpd/0.log" Jan 29 17:41:43 crc kubenswrapper[4895]: I0129 17:41:43.744296 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ad356ea5-8184-46f6-b58d-399b0a742239/sg-core/0.log" Jan 29 17:41:43 crc kubenswrapper[4895]: I0129 17:41:43.879948 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-slstk_f6e8674c-e754-4883-a39e-a77c2ae8cf02/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:43 crc kubenswrapper[4895]: I0129 17:41:43.927275 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rhq5d_dbd90aa1-d72b-4ea6-a4c2-7f3526b0a394/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:44 crc kubenswrapper[4895]: I0129 17:41:44.037363 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:41:44 crc kubenswrapper[4895]: E0129 17:41:44.037714 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:41:44 crc kubenswrapper[4895]: I0129 17:41:44.442598 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_fd7b84a0-a0ab-40b1-802f-3ec3279e1712/probe/0.log" Jan 29 17:41:44 crc kubenswrapper[4895]: I0129 17:41:44.648208 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_761f973d-98f3-4972-ab4d-60398028e804/cinder-api/0.log" Jan 29 17:41:44 crc kubenswrapper[4895]: I0129 
17:41:44.872221 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cf6f1e04-8de8-41c4-816a-b2293ca9886e/cinder-scheduler/0.log" Jan 29 17:41:44 crc kubenswrapper[4895]: I0129 17:41:44.897170 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_761f973d-98f3-4972-ab4d-60398028e804/cinder-api-log/0.log" Jan 29 17:41:44 crc kubenswrapper[4895]: I0129 17:41:44.941719 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cf6f1e04-8de8-41c4-816a-b2293ca9886e/probe/0.log" Jan 29 17:41:45 crc kubenswrapper[4895]: I0129 17:41:45.340705 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_e241f959-4c63-420a-9ab9-988ce0f2a46a/probe/0.log" Jan 29 17:41:45 crc kubenswrapper[4895]: I0129 17:41:45.597250 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-6htpt_43df5196-f55f-497d-bf95-35b7b2b40a46/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:46 crc kubenswrapper[4895]: I0129 17:41:46.232556 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jkl7t_8b5bbe74-3ed2-4061-bc48-cd76433873da/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:46 crc kubenswrapper[4895]: I0129 17:41:46.467245 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-9bxf2_892c18fa-4c09-46ac-aa0e-42f0466f4b5c/init/0.log" Jan 29 17:41:46 crc kubenswrapper[4895]: I0129 17:41:46.699929 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-9bxf2_892c18fa-4c09-46ac-aa0e-42f0466f4b5c/init/0.log" Jan 29 17:41:46 crc kubenswrapper[4895]: I0129 17:41:46.793154 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-backup-0_fd7b84a0-a0ab-40b1-802f-3ec3279e1712/cinder-backup/0.log" Jan 29 17:41:46 crc kubenswrapper[4895]: I0129 17:41:46.802516 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-9bxf2_892c18fa-4c09-46ac-aa0e-42f0466f4b5c/dnsmasq-dns/0.log" Jan 29 17:41:46 crc kubenswrapper[4895]: I0129 17:41:46.936892 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3ef22134-e2b5-45b3-87bb-1b061d3834d2/glance-httpd/0.log" Jan 29 17:41:47 crc kubenswrapper[4895]: I0129 17:41:47.047140 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3ef22134-e2b5-45b3-87bb-1b061d3834d2/glance-log/0.log" Jan 29 17:41:47 crc kubenswrapper[4895]: I0129 17:41:47.145317 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d710098a-e10c-427b-8bdb-bb9cfad0376d/glance-log/0.log" Jan 29 17:41:47 crc kubenswrapper[4895]: I0129 17:41:47.192655 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d710098a-e10c-427b-8bdb-bb9cfad0376d/glance-httpd/0.log" Jan 29 17:41:47 crc kubenswrapper[4895]: I0129 17:41:47.365145 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bb5cc9d-przr2_7823fb45-6935-459a-a1c9-7723a2f52136/horizon/0.log" Jan 29 17:41:47 crc kubenswrapper[4895]: I0129 17:41:47.657860 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bb5cc9d-przr2_7823fb45-6935-459a-a1c9-7723a2f52136/horizon-log/0.log" Jan 29 17:41:48 crc kubenswrapper[4895]: I0129 17:41:48.043419 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dvqdb_37a09037-a0fd-4fa0-94de-a819953a38a1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:48 crc kubenswrapper[4895]: I0129 17:41:48.087965 4895 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-52ns4_b749d017-f562-4021-9a9d-474569a1400e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:48 crc kubenswrapper[4895]: E0129 17:41:48.168223 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:41:48 crc kubenswrapper[4895]: E0129 17:41:48.168601 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm5mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsO
ptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vzzds_openshift-marketplace(4c4f588f-f4f6-4000-9919-8461a6af64a3): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:41:48 crc kubenswrapper[4895]: E0129 17:41:48.170025 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:41:48 crc kubenswrapper[4895]: I0129 17:41:48.558659 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29495101-hnvf7_ae33dd87-375b-4069-ae8c-5135ec7f8fe9/keystone-cron/0.log" Jan 29 17:41:48 crc kubenswrapper[4895]: I0129 17:41:48.727531 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_008b84dd-8bf0-440a-bde9-4bbc0ab1b412/kube-state-metrics/0.log" Jan 29 17:41:48 crc kubenswrapper[4895]: I0129 17:41:48.970123 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vnmm5_4729dc58-3e8c-421b-82f8-45a513c3559d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:49 crc kubenswrapper[4895]: E0129 17:41:49.037616 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:41:49 crc kubenswrapper[4895]: I0129 17:41:49.134755 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_c1411d2f-0b19-4ba7-bca3-0e19bfaa3002/manila-api-log/0.log" Jan 29 17:41:49 crc kubenswrapper[4895]: I0129 17:41:49.200450 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_c1411d2f-0b19-4ba7-bca3-0e19bfaa3002/manila-api/0.log" Jan 29 17:41:49 crc kubenswrapper[4895]: I0129 17:41:49.356456 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7f897d48fd-hgqsw_1971cd12-642f-4a58-917d-4dda4953854a/keystone-api/0.log" Jan 29 17:41:49 crc kubenswrapper[4895]: I0129 17:41:49.390313 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_ca80fce3-90df-492f-8819-1df2e246b1b5/manila-scheduler/0.log" Jan 29 17:41:49 crc kubenswrapper[4895]: I0129 17:41:49.410975 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_ca80fce3-90df-492f-8819-1df2e246b1b5/probe/0.log" Jan 29 17:41:49 crc kubenswrapper[4895]: I0129 17:41:49.571896 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_c45486c1-31e6-47ac-94aa-5da2c0edbbaf/manila-share/0.log" Jan 29 17:41:49 crc kubenswrapper[4895]: I0129 17:41:49.580835 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_c45486c1-31e6-47ac-94aa-5da2c0edbbaf/probe/0.log" Jan 29 17:41:50 crc kubenswrapper[4895]: I0129 17:41:50.127248 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6d896887bc-zxx6j_658f67cc-4c62-4b84-9fba-60e98ece6389/neutron-httpd/0.log" Jan 29 17:41:50 crc kubenswrapper[4895]: I0129 17:41:50.157246 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-6d896887bc-zxx6j_658f67cc-4c62-4b84-9fba-60e98ece6389/neutron-api/0.log" Jan 29 17:41:50 crc kubenswrapper[4895]: I0129 17:41:50.224508 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-x6hp7_cd5e91b8-4378-437f-be31-9a86a5fc2ef7/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:50 crc kubenswrapper[4895]: I0129 17:41:50.951076 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_a4c21cd5-3241-44a4-a189-025ef2084f9d/nova-cell0-conductor-conductor/0.log" Jan 29 17:41:51 crc kubenswrapper[4895]: I0129 17:41:51.240655 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1362e7b3-cc8c-4a47-a93f-f5e98cce6acd/nova-api-log/0.log" Jan 29 17:41:51 crc kubenswrapper[4895]: I0129 17:41:51.516327 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_55543556-df47-452c-8436-353ddc374f3f/nova-cell1-conductor-conductor/0.log" Jan 29 17:41:51 crc kubenswrapper[4895]: I0129 17:41:51.778768 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_bb7cd62a-8d8a-4f2e-a88d-2a028960477f/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 17:41:51 crc kubenswrapper[4895]: I0129 17:41:51.863169 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1362e7b3-cc8c-4a47-a93f-f5e98cce6acd/nova-api-api/0.log" Jan 29 17:41:51 crc kubenswrapper[4895]: I0129 17:41:51.965112 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nzns4_6a0a88bf-e09f-4ff3-bcdd-f9ac967335a7/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:52 crc kubenswrapper[4895]: I0129 17:41:52.200542 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7/nova-metadata-log/0.log" Jan 29 17:41:52 crc kubenswrapper[4895]: I0129 17:41:52.572517 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7b8e9bd4-5ccc-4cad-84c4-be15b2c180b1/nova-scheduler-scheduler/0.log" Jan 29 17:41:52 crc kubenswrapper[4895]: I0129 17:41:52.609583 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41493083-077b-4518-a749-48a27e14b2a7/mysql-bootstrap/0.log" Jan 29 17:41:52 crc kubenswrapper[4895]: I0129 17:41:52.805690 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41493083-077b-4518-a749-48a27e14b2a7/mysql-bootstrap/0.log" Jan 29 17:41:52 crc kubenswrapper[4895]: I0129 17:41:52.830719 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_41493083-077b-4518-a749-48a27e14b2a7/galera/0.log" Jan 29 17:41:53 crc kubenswrapper[4895]: I0129 17:41:53.033845 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ef5d7b98-96fe-49e9-ba5b-f662a93ce514/mysql-bootstrap/0.log" Jan 29 17:41:53 crc kubenswrapper[4895]: I0129 17:41:53.190770 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ef5d7b98-96fe-49e9-ba5b-f662a93ce514/mysql-bootstrap/0.log" Jan 29 17:41:53 crc kubenswrapper[4895]: I0129 17:41:53.267907 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ef5d7b98-96fe-49e9-ba5b-f662a93ce514/galera/0.log" Jan 29 17:41:53 crc kubenswrapper[4895]: I0129 17:41:53.483054 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d50b4deb-a738-4cac-9481-b4085086c116/openstackclient/0.log" Jan 29 17:41:53 crc kubenswrapper[4895]: I0129 17:41:53.662603 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-7h6nr_37ab7a53-0bcb-4f36-baa2-8d125d379bd3/ovn-controller/0.log" Jan 29 17:41:53 crc kubenswrapper[4895]: I0129 17:41:53.820414 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bfpw2_b973acd5-f963-43d9-8797-dece98571fa9/openstack-network-exporter/0.log" Jan 29 17:41:54 crc kubenswrapper[4895]: I0129 17:41:54.030969 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9d8jr_1b3f9699-0154-45bb-a444-85cc44faac88/ovsdb-server-init/0.log" Jan 29 17:41:54 crc kubenswrapper[4895]: I0129 17:41:54.246564 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9d8jr_1b3f9699-0154-45bb-a444-85cc44faac88/ovs-vswitchd/0.log" Jan 29 17:41:54 crc kubenswrapper[4895]: I0129 17:41:54.278066 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9d8jr_1b3f9699-0154-45bb-a444-85cc44faac88/ovsdb-server-init/0.log" Jan 29 17:41:54 crc kubenswrapper[4895]: I0129 17:41:54.433162 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9d8jr_1b3f9699-0154-45bb-a444-85cc44faac88/ovsdb-server/0.log" Jan 29 17:41:54 crc kubenswrapper[4895]: I0129 17:41:54.476632 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ac1eaec6-ae6c-40bd-b790-7e1fe9c8f0f7/nova-metadata-metadata/0.log" Jan 29 17:41:54 crc kubenswrapper[4895]: I0129 17:41:54.664351 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-t8zxt_27c14832-5dad-4502-ac5a-5f2cd24d7874/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:54 crc kubenswrapper[4895]: I0129 17:41:54.670451 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1/openstack-network-exporter/0.log" Jan 29 17:41:54 crc kubenswrapper[4895]: I0129 
17:41:54.868935 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1e9ecd56-fb40-49e0-ae7a-7a8fe4a083c1/ovn-northd/0.log" Jan 29 17:41:54 crc kubenswrapper[4895]: I0129 17:41:54.894030 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_896fe284-9834-4c99-b82f-1f13cb4b3857/openstack-network-exporter/0.log" Jan 29 17:41:54 crc kubenswrapper[4895]: I0129 17:41:54.910996 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_e241f959-4c63-420a-9ab9-988ce0f2a46a/cinder-volume/0.log" Jan 29 17:41:55 crc kubenswrapper[4895]: I0129 17:41:55.047163 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_896fe284-9834-4c99-b82f-1f13cb4b3857/ovsdbserver-nb/0.log" Jan 29 17:41:55 crc kubenswrapper[4895]: I0129 17:41:55.111197 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1ffe9927-329d-4120-b676-a27782b60e94/openstack-network-exporter/0.log" Jan 29 17:41:55 crc kubenswrapper[4895]: I0129 17:41:55.185467 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1ffe9927-329d-4120-b676-a27782b60e94/ovsdbserver-sb/0.log" Jan 29 17:41:55 crc kubenswrapper[4895]: I0129 17:41:55.370819 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f656cd776-2tcds_49a0b65b-0a7a-4681-9f06-e1a411e1e8d3/placement-api/0.log" Jan 29 17:41:55 crc kubenswrapper[4895]: I0129 17:41:55.440236 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7f656cd776-2tcds_49a0b65b-0a7a-4681-9f06-e1a411e1e8d3/placement-log/0.log" Jan 29 17:41:55 crc kubenswrapper[4895]: I0129 17:41:55.577372 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e928bf68-d1d2-4d90-b479-f589568e5145/setup-container/0.log" Jan 29 17:41:55 crc kubenswrapper[4895]: I0129 17:41:55.721372 4895 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e928bf68-d1d2-4d90-b479-f589568e5145/rabbitmq/0.log" Jan 29 17:41:55 crc kubenswrapper[4895]: I0129 17:41:55.732583 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e928bf68-d1d2-4d90-b479-f589568e5145/setup-container/0.log" Jan 29 17:41:55 crc kubenswrapper[4895]: I0129 17:41:55.752334 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d22c33fb-a278-483f-ae02-d85d04ac9381/memcached/0.log" Jan 29 17:41:55 crc kubenswrapper[4895]: I0129 17:41:55.778397 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6d483294-14b5-4b14-8e09-e88d4d83a359/setup-container/0.log" Jan 29 17:41:55 crc kubenswrapper[4895]: I0129 17:41:55.993133 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6d483294-14b5-4b14-8e09-e88d4d83a359/setup-container/0.log" Jan 29 17:41:56 crc kubenswrapper[4895]: I0129 17:41:56.007676 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jpcp2_a521de95-49f8-451c-9d1f-0e938e4c3aa5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:56 crc kubenswrapper[4895]: I0129 17:41:56.012288 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6d483294-14b5-4b14-8e09-e88d4d83a359/rabbitmq/0.log" Jan 29 17:41:56 crc kubenswrapper[4895]: I0129 17:41:56.162022 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-gmmdq_5155d24f-53de-4346-bd5f-a5ba690d1a6d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:56 crc kubenswrapper[4895]: I0129 17:41:56.205194 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9gqfn_36973575-f9d7-4d47-b222-5072acf5317d/ssh-known-hosts-edpm-deployment/0.log" Jan 29 17:41:56 crc kubenswrapper[4895]: I0129 17:41:56.221372 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-qm4c7_75cbd8da-e8b9-4d15-b092-e0bb97e177d0/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:56 crc kubenswrapper[4895]: I0129 17:41:56.373452 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_bd9cf013-3c56-469b-8321-8937d5919276/test-operator-logs-container/0.log" Jan 29 17:41:56 crc kubenswrapper[4895]: I0129 17:41:56.522616 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_fa7221ee-55be-4a14-8149-7299f46d1f0d/tempest-tests-tempest-tests-runner/0.log" Jan 29 17:41:56 crc kubenswrapper[4895]: I0129 17:41:56.626389 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-xhrcn_3e9466f5-f2f5-43f4-9347-84084177d1df/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 17:41:58 crc kubenswrapper[4895]: I0129 17:41:58.037402 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:41:58 crc kubenswrapper[4895]: E0129 17:41:58.037799 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:42:01 crc kubenswrapper[4895]: E0129 17:42:01.040621 4895 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:42:01 crc kubenswrapper[4895]: E0129 17:42:01.176228 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 17:42:01 crc kubenswrapper[4895]: E0129 17:42:01.176377 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2862q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rgndp_openshift-marketplace(ff147090-376c-429b-a465-a41d2772de00): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:42:01 crc kubenswrapper[4895]: E0129 17:42:01.177493 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:42:11 crc kubenswrapper[4895]: I0129 17:42:11.036989 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:42:11 crc kubenswrapper[4895]: E0129 17:42:11.037732 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:42:15 crc kubenswrapper[4895]: E0129 17:42:15.040087 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:42:15 crc kubenswrapper[4895]: E0129 17:42:15.040105 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:42:19 crc kubenswrapper[4895]: I0129 17:42:19.869716 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-6hjs8_f5589d31-28a0-45e8-a3fa-9b48576c81fc/manager/0.log" Jan 29 17:42:20 crc kubenswrapper[4895]: I0129 17:42:20.106623 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-l6jpv_6377326a-b83d-43f6-bb58-fcf54eac8ac2/manager/0.log" Jan 29 17:42:20 crc kubenswrapper[4895]: I0129 17:42:20.230942 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz_fe18f98b-f291-4ce0-bd4a-52f356c5b910/util/0.log" Jan 29 17:42:20 crc kubenswrapper[4895]: I0129 17:42:20.467897 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz_fe18f98b-f291-4ce0-bd4a-52f356c5b910/util/0.log" Jan 29 17:42:20 crc kubenswrapper[4895]: I0129 17:42:20.480656 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz_fe18f98b-f291-4ce0-bd4a-52f356c5b910/pull/0.log" Jan 29 17:42:20 crc kubenswrapper[4895]: I0129 17:42:20.518814 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz_fe18f98b-f291-4ce0-bd4a-52f356c5b910/pull/0.log" Jan 29 17:42:20 crc kubenswrapper[4895]: I0129 17:42:20.703893 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz_fe18f98b-f291-4ce0-bd4a-52f356c5b910/pull/0.log" Jan 29 17:42:20 crc kubenswrapper[4895]: I0129 17:42:20.748968 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz_fe18f98b-f291-4ce0-bd4a-52f356c5b910/extract/0.log" Jan 29 17:42:20 crc kubenswrapper[4895]: I0129 17:42:20.750447 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f6558529c1ff3ec2968cd6855782f72333b4c1e9d76896cc634a68915bqzdvz_fe18f98b-f291-4ce0-bd4a-52f356c5b910/util/0.log" Jan 29 17:42:21 crc 
kubenswrapper[4895]: I0129 17:42:21.078176 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-flr48_928d4ebe-bbab-4956-9d41-a6ef3c91e62d/manager/0.log" Jan 29 17:42:21 crc kubenswrapper[4895]: I0129 17:42:21.206191 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-qwjw9_0c18b9fd-01fa-4be9-be45-5ad49240591a/manager/0.log" Jan 29 17:42:21 crc kubenswrapper[4895]: I0129 17:42:21.327341 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-vwdtk_e605b5cd-74f0-4c19-b7e7-9f726595eeb5/manager/0.log" Jan 29 17:42:21 crc kubenswrapper[4895]: I0129 17:42:21.388467 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-bf99b56bc-929qv_1e62c1e2-2a44-4985-a787-ad3cfaa3ba5d/manager/0.log" Jan 29 17:42:21 crc kubenswrapper[4895]: I0129 17:42:21.579942 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-46kvb_a8dbc4ca-5e45-424e-aa1f-6c7e9e24e74c/manager/0.log" Jan 29 17:42:21 crc kubenswrapper[4895]: I0129 17:42:21.742413 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-pd4hv_5a7c85c3-b835-48f5-99ca-2c2949ab85bf/manager/0.log" Jan 29 17:42:21 crc kubenswrapper[4895]: I0129 17:42:21.838297 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-29rf4_2fb0cdc6-64b5-432f-a998-26174db87dbb/manager/0.log" Jan 29 17:42:21 crc kubenswrapper[4895]: I0129 17:42:21.931317 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-96st6_0458d64d-6cee-41f7-bb2d-17fe71893b95/manager/0.log" Jan 29 
17:42:22 crc kubenswrapper[4895]: I0129 17:42:22.061765 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-l6b5x_53136333-31ce-4a3c-9477-0dde82bc7ec0/manager/0.log" Jan 29 17:42:22 crc kubenswrapper[4895]: I0129 17:42:22.174914 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-4v7lq_ebaae8a3-53e7-4aec-88cb-9723acd3350d/manager/0.log" Jan 29 17:42:22 crc kubenswrapper[4895]: I0129 17:42:22.297223 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-cmhwf_14c9beca-1f3d-42cb-91d2-f7e391a9761a/manager/0.log" Jan 29 17:42:22 crc kubenswrapper[4895]: I0129 17:42:22.411201 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-jndgk_826ca63d-ce7b-4d52-9fb5-31bdbb523416/manager/0.log" Jan 29 17:42:22 crc kubenswrapper[4895]: I0129 17:42:22.447935 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dzg8hh_861c2b0f-fa28-408a-b270-a7e1f9ee57e2/manager/0.log" Jan 29 17:42:22 crc kubenswrapper[4895]: I0129 17:42:22.774564 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-f9fb88ddf-v4wc7_f360f1f7-fabd-4268-88e2-ef3fb4e88a9b/operator/0.log" Jan 29 17:42:22 crc kubenswrapper[4895]: I0129 17:42:22.879948 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-vd26w_c1fe06a9-7c3c-4541-b7e9-ed083b22d775/registry-server/0.log" Jan 29 17:42:23 crc kubenswrapper[4895]: I0129 17:42:23.047444 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-2xzrq_15381bb6-d539-48b4-976c-5b2a27fa7aaa/manager/0.log" 
Jan 29 17:42:23 crc kubenswrapper[4895]: I0129 17:42:23.191753 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-b4wtv_dbd2491c-2587-47d3-8201-26b8e68bfcb7/manager/0.log" Jan 29 17:42:23 crc kubenswrapper[4895]: I0129 17:42:23.330940 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dbmk4_7b0b2050-5cdb-44e0-a858-bf6aa331d2c6/operator/0.log" Jan 29 17:42:23 crc kubenswrapper[4895]: I0129 17:42:23.510075 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-5z28c_3c7eb208-8a46-49ed-8efd-1b0fceabd3c8/manager/0.log" Jan 29 17:42:23 crc kubenswrapper[4895]: I0129 17:42:23.777488 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-f7b6w_91b335a1-04fb-48e6-bf93-8bce6c4da648/manager/0.log" Jan 29 17:42:23 crc kubenswrapper[4895]: I0129 17:42:23.797457 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-8vvcr_9730658e-c8ca-4448-a3c0-68116c92840f/manager/0.log" Jan 29 17:42:23 crc kubenswrapper[4895]: I0129 17:42:23.852644 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6dbd47d457-rqtbg_7c8841b5-eefc-4ce3-bb5b-111a252e4316/manager/0.log" Jan 29 17:42:23 crc kubenswrapper[4895]: I0129 17:42:23.932220 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-vcgf8_d3b8a7bf-6741-4bfe-8835-e942e688098d/manager/0.log" Jan 29 17:42:24 crc kubenswrapper[4895]: I0129 17:42:24.036858 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:42:24 crc kubenswrapper[4895]: E0129 
17:42:24.037120 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:42:27 crc kubenswrapper[4895]: E0129 17:42:27.045794 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:42:30 crc kubenswrapper[4895]: E0129 17:42:30.169708 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:42:30 crc kubenswrapper[4895]: E0129 17:42:30.170287 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm5mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vzzds_openshift-marketplace(4c4f588f-f4f6-4000-9919-8461a6af64a3): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:42:30 crc kubenswrapper[4895]: E0129 17:42:30.171481 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:42:35 crc kubenswrapper[4895]: I0129 17:42:35.037407 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:42:35 crc kubenswrapper[4895]: E0129 17:42:35.038268 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:42:41 crc kubenswrapper[4895]: E0129 17:42:41.039308 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:42:41 crc kubenswrapper[4895]: I0129 17:42:41.786950 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dp6dw_c2b62e00-db4e-4ce9-8dd1-717159043f83/control-plane-machine-set-operator/0.log" Jan 29 17:42:41 crc kubenswrapper[4895]: I0129 17:42:41.994074 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cv44f_2c9ef793-c9ca-4c0a-9ab0-09115c564646/kube-rbac-proxy/0.log" Jan 29 17:42:41 crc kubenswrapper[4895]: I0129 17:42:41.994609 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cv44f_2c9ef793-c9ca-4c0a-9ab0-09115c564646/machine-api-operator/0.log" Jan 29 17:42:42 crc kubenswrapper[4895]: E0129 
17:42:42.038621 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:42:46 crc kubenswrapper[4895]: I0129 17:42:46.226492 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:42:46 crc kubenswrapper[4895]: E0129 17:42:46.227563 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:42:53 crc kubenswrapper[4895]: E0129 17:42:53.040519 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:42:54 crc kubenswrapper[4895]: I0129 17:42:54.993019 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-5qzkz_18e923cd-60ff-4beb-8e93-52e824bfd999/cert-manager-controller/0.log" Jan 29 17:42:55 crc kubenswrapper[4895]: I0129 17:42:55.171892 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rfvkt_14bf5370-0eb3-41bf-a14a-0115f945a9bb/cert-manager-cainjector/0.log" Jan 29 17:42:55 crc kubenswrapper[4895]: I0129 17:42:55.213508 4895 log.go:25] "Finished parsing log 
file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-h2qtn_15590504-595f-4b06-a0a1-5f25e83967ec/cert-manager-webhook/0.log" Jan 29 17:42:56 crc kubenswrapper[4895]: E0129 17:42:56.039330 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:42:59 crc kubenswrapper[4895]: I0129 17:42:59.038622 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:42:59 crc kubenswrapper[4895]: E0129 17:42:59.039397 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:43:05 crc kubenswrapper[4895]: E0129 17:43:05.039279 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:43:09 crc kubenswrapper[4895]: I0129 17:43:09.058940 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-cl5vs_adf5441b-6337-49f1-992c-00ae9c9180b4/nmstate-console-plugin/0.log" Jan 29 17:43:09 crc kubenswrapper[4895]: I0129 17:43:09.200840 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-59nh2_d1ce6e0d-a5d7-4d97-b029-9580c624ff4c/nmstate-handler/0.log" Jan 29 17:43:09 crc kubenswrapper[4895]: I0129 17:43:09.289427 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-w28gt_48a117fe-e939-4f89-8c46-c6b16e209948/kube-rbac-proxy/0.log" Jan 29 17:43:09 crc kubenswrapper[4895]: I0129 17:43:09.352222 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-w28gt_48a117fe-e939-4f89-8c46-c6b16e209948/nmstate-metrics/0.log" Jan 29 17:43:09 crc kubenswrapper[4895]: I0129 17:43:09.457337 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-wps6n_a3a795fb-ebbd-463c-8aae-317aafd133f8/nmstate-operator/0.log" Jan 29 17:43:09 crc kubenswrapper[4895]: I0129 17:43:09.516199 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-757pr_c943c1d4-650a-46b7-805e-d89160518569/nmstate-webhook/0.log" Jan 29 17:43:10 crc kubenswrapper[4895]: I0129 17:43:10.038694 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:43:10 crc kubenswrapper[4895]: E0129 17:43:10.039086 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:43:10 crc kubenswrapper[4895]: E0129 17:43:10.041593 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" Jan 29 17:43:17 crc kubenswrapper[4895]: E0129 17:43:17.046339 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:43:23 crc kubenswrapper[4895]: I0129 17:43:23.834805 4895 generic.go:334] "Generic (PLEG): container finished" podID="ff147090-376c-429b-a465-a41d2772de00" containerID="568d9b0bbd005b0dfb2a2bab0a6a40148464e6dd9041d76c236f84b8f3e20697" exitCode=0 Jan 29 17:43:23 crc kubenswrapper[4895]: I0129 17:43:23.834886 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgndp" event={"ID":"ff147090-376c-429b-a465-a41d2772de00","Type":"ContainerDied","Data":"568d9b0bbd005b0dfb2a2bab0a6a40148464e6dd9041d76c236f84b8f3e20697"} Jan 29 17:43:24 crc kubenswrapper[4895]: I0129 17:43:24.848702 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgndp" event={"ID":"ff147090-376c-429b-a465-a41d2772de00","Type":"ContainerStarted","Data":"b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa"} Jan 29 17:43:24 crc kubenswrapper[4895]: I0129 17:43:24.872440 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgndp" podStartSLOduration=2.810645977 podStartE2EDuration="2m59.872397609s" podCreationTimestamp="2026-01-29 17:40:25 +0000 UTC" firstStartedPulling="2026-01-29 17:40:27.211008764 +0000 UTC m=+5311.013986028" lastFinishedPulling="2026-01-29 17:43:24.272760386 +0000 UTC m=+5488.075737660" observedRunningTime="2026-01-29 17:43:24.865791531 +0000 UTC 
m=+5488.668768795" watchObservedRunningTime="2026-01-29 17:43:24.872397609 +0000 UTC m=+5488.675374873" Jan 29 17:43:25 crc kubenswrapper[4895]: I0129 17:43:25.036777 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:43:25 crc kubenswrapper[4895]: E0129 17:43:25.037041 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:43:26 crc kubenswrapper[4895]: I0129 17:43:26.304020 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:43:26 crc kubenswrapper[4895]: I0129 17:43:26.305167 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:43:26 crc kubenswrapper[4895]: I0129 17:43:26.360923 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:43:31 crc kubenswrapper[4895]: E0129 17:43:31.039846 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:43:36 crc kubenswrapper[4895]: I0129 17:43:36.357127 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:43:36 crc kubenswrapper[4895]: I0129 17:43:36.424096 4895 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-54m64_22149e2b-6b31-4bfe-930d-e14cf24aefb1/kube-rbac-proxy/0.log" Jan 29 17:43:36 crc kubenswrapper[4895]: I0129 17:43:36.443085 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-54m64_22149e2b-6b31-4bfe-930d-e14cf24aefb1/controller/0.log" Jan 29 17:43:36 crc kubenswrapper[4895]: I0129 17:43:36.607399 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/cp-frr-files/0.log" Jan 29 17:43:36 crc kubenswrapper[4895]: I0129 17:43:36.787640 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/cp-frr-files/0.log" Jan 29 17:43:36 crc kubenswrapper[4895]: I0129 17:43:36.799832 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/cp-metrics/0.log" Jan 29 17:43:36 crc kubenswrapper[4895]: I0129 17:43:36.802189 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/cp-reloader/0.log" Jan 29 17:43:36 crc kubenswrapper[4895]: I0129 17:43:36.842911 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/cp-reloader/0.log" Jan 29 17:43:36 crc kubenswrapper[4895]: I0129 17:43:36.995330 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/cp-metrics/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.000384 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/cp-reloader/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.001610 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/cp-frr-files/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.005946 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/cp-metrics/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.042543 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:43:37 crc kubenswrapper[4895]: E0129 17:43:37.042924 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.093708 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgndp"] Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.093934 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rgndp" podUID="ff147090-376c-429b-a465-a41d2772de00" containerName="registry-server" containerID="cri-o://b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa" gracePeriod=2 Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.187917 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/cp-reloader/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.195726 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/cp-frr-files/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: 
I0129 17:43:37.211259 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/cp-metrics/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.233299 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/controller/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.401531 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/frr-metrics/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.415553 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/kube-rbac-proxy-frr/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.416123 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/kube-rbac-proxy/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.542312 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.604363 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/reloader/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.661504 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-4kq2k_4ba6715b-7048-450e-a391-7f51a11087a2/frr-k8s-webhook-server/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.710558 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2862q\" (UniqueName: \"kubernetes.io/projected/ff147090-376c-429b-a465-a41d2772de00-kube-api-access-2862q\") pod \"ff147090-376c-429b-a465-a41d2772de00\" (UID: \"ff147090-376c-429b-a465-a41d2772de00\") " Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.710598 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff147090-376c-429b-a465-a41d2772de00-catalog-content\") pod \"ff147090-376c-429b-a465-a41d2772de00\" (UID: \"ff147090-376c-429b-a465-a41d2772de00\") " Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.710657 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff147090-376c-429b-a465-a41d2772de00-utilities\") pod \"ff147090-376c-429b-a465-a41d2772de00\" (UID: \"ff147090-376c-429b-a465-a41d2772de00\") " Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.711477 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff147090-376c-429b-a465-a41d2772de00-utilities" (OuterVolumeSpecName: "utilities") pod "ff147090-376c-429b-a465-a41d2772de00" (UID: "ff147090-376c-429b-a465-a41d2772de00"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.732074 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff147090-376c-429b-a465-a41d2772de00-kube-api-access-2862q" (OuterVolumeSpecName: "kube-api-access-2862q") pod "ff147090-376c-429b-a465-a41d2772de00" (UID: "ff147090-376c-429b-a465-a41d2772de00"). InnerVolumeSpecName "kube-api-access-2862q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.733630 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff147090-376c-429b-a465-a41d2772de00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff147090-376c-429b-a465-a41d2772de00" (UID: "ff147090-376c-429b-a465-a41d2772de00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.812495 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2862q\" (UniqueName: \"kubernetes.io/projected/ff147090-376c-429b-a465-a41d2772de00-kube-api-access-2862q\") on node \"crc\" DevicePath \"\"" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.812524 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff147090-376c-429b-a465-a41d2772de00-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.812535 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff147090-376c-429b-a465-a41d2772de00-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.820443 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b596b8569-nv8c6_2563d162-e755-47fe-9b15-4975ece29fb2/manager/0.log" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.968312 4895 generic.go:334] "Generic (PLEG): container finished" podID="ff147090-376c-429b-a465-a41d2772de00" containerID="b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa" exitCode=0 Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.968369 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgndp" event={"ID":"ff147090-376c-429b-a465-a41d2772de00","Type":"ContainerDied","Data":"b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa"} Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.968378 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgndp" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.968404 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgndp" event={"ID":"ff147090-376c-429b-a465-a41d2772de00","Type":"ContainerDied","Data":"b5ee68f8f5200f80b5abae19bb1acf72c27392f474ac5d4bd7d996017b643acb"} Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.968428 4895 scope.go:117] "RemoveContainer" containerID="b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa" Jan 29 17:43:37 crc kubenswrapper[4895]: I0129 17:43:37.991846 4895 scope.go:117] "RemoveContainer" containerID="568d9b0bbd005b0dfb2a2bab0a6a40148464e6dd9041d76c236f84b8f3e20697" Jan 29 17:43:38 crc kubenswrapper[4895]: I0129 17:43:38.006041 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgndp"] Jan 29 17:43:38 crc kubenswrapper[4895]: I0129 17:43:38.015423 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7bfcf9699b-f8jbx_e900fb6a-e8f0-4fff-8873-a109732c4bc1/webhook-server/0.log" Jan 29 17:43:38 crc kubenswrapper[4895]: I0129 17:43:38.017115 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgndp"] Jan 29 17:43:38 crc kubenswrapper[4895]: I0129 17:43:38.024826 4895 scope.go:117] "RemoveContainer" containerID="6e5b392bf23bb0d8aee0bd0ce42d3e2bb758c3f5af81dd767233575144988ddf" Jan 29 17:43:38 crc kubenswrapper[4895]: I0129 17:43:38.063324 4895 scope.go:117] "RemoveContainer" containerID="b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa" Jan 29 17:43:38 crc kubenswrapper[4895]: E0129 17:43:38.063728 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa\": container with ID starting with b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa not found: ID does not exist" containerID="b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa" Jan 29 17:43:38 crc kubenswrapper[4895]: I0129 17:43:38.063821 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa"} err="failed to get container status \"b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa\": rpc error: code = NotFound desc = could not find container \"b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa\": container with ID starting with b0ac2df6fd6473ae1dd8f2abae9a688a3e787323813ab50fad4c2c689b44f4aa not found: ID does not exist" Jan 29 17:43:38 crc kubenswrapper[4895]: I0129 17:43:38.063923 4895 scope.go:117] "RemoveContainer" containerID="568d9b0bbd005b0dfb2a2bab0a6a40148464e6dd9041d76c236f84b8f3e20697" Jan 29 17:43:38 crc kubenswrapper[4895]: E0129 17:43:38.065012 4895 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568d9b0bbd005b0dfb2a2bab0a6a40148464e6dd9041d76c236f84b8f3e20697\": container with ID starting with 568d9b0bbd005b0dfb2a2bab0a6a40148464e6dd9041d76c236f84b8f3e20697 not found: ID does not exist" containerID="568d9b0bbd005b0dfb2a2bab0a6a40148464e6dd9041d76c236f84b8f3e20697" Jan 29 17:43:38 crc kubenswrapper[4895]: I0129 17:43:38.065090 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568d9b0bbd005b0dfb2a2bab0a6a40148464e6dd9041d76c236f84b8f3e20697"} err="failed to get container status \"568d9b0bbd005b0dfb2a2bab0a6a40148464e6dd9041d76c236f84b8f3e20697\": rpc error: code = NotFound desc = could not find container \"568d9b0bbd005b0dfb2a2bab0a6a40148464e6dd9041d76c236f84b8f3e20697\": container with ID starting with 568d9b0bbd005b0dfb2a2bab0a6a40148464e6dd9041d76c236f84b8f3e20697 not found: ID does not exist" Jan 29 17:43:38 crc kubenswrapper[4895]: I0129 17:43:38.065168 4895 scope.go:117] "RemoveContainer" containerID="6e5b392bf23bb0d8aee0bd0ce42d3e2bb758c3f5af81dd767233575144988ddf" Jan 29 17:43:38 crc kubenswrapper[4895]: E0129 17:43:38.068322 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e5b392bf23bb0d8aee0bd0ce42d3e2bb758c3f5af81dd767233575144988ddf\": container with ID starting with 6e5b392bf23bb0d8aee0bd0ce42d3e2bb758c3f5af81dd767233575144988ddf not found: ID does not exist" containerID="6e5b392bf23bb0d8aee0bd0ce42d3e2bb758c3f5af81dd767233575144988ddf" Jan 29 17:43:38 crc kubenswrapper[4895]: I0129 17:43:38.068628 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e5b392bf23bb0d8aee0bd0ce42d3e2bb758c3f5af81dd767233575144988ddf"} err="failed to get container status \"6e5b392bf23bb0d8aee0bd0ce42d3e2bb758c3f5af81dd767233575144988ddf\": rpc error: code = NotFound desc = could 
not find container \"6e5b392bf23bb0d8aee0bd0ce42d3e2bb758c3f5af81dd767233575144988ddf\": container with ID starting with 6e5b392bf23bb0d8aee0bd0ce42d3e2bb758c3f5af81dd767233575144988ddf not found: ID does not exist" Jan 29 17:43:38 crc kubenswrapper[4895]: I0129 17:43:38.112507 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bcds8_1ca74760-05c3-41f7-aafa-4e20a1021102/kube-rbac-proxy/0.log" Jan 29 17:43:38 crc kubenswrapper[4895]: I0129 17:43:38.658768 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-bcds8_1ca74760-05c3-41f7-aafa-4e20a1021102/speaker/0.log" Jan 29 17:43:39 crc kubenswrapper[4895]: I0129 17:43:39.046552 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff147090-376c-429b-a465-a41d2772de00" path="/var/lib/kubelet/pods/ff147090-376c-429b-a465-a41d2772de00/volumes" Jan 29 17:43:39 crc kubenswrapper[4895]: I0129 17:43:39.082494 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-89f9w_88eaedc5-b046-4629-a360-92edd0bb09e1/frr/0.log" Jan 29 17:43:42 crc kubenswrapper[4895]: E0129 17:43:42.038835 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:43:50 crc kubenswrapper[4895]: I0129 17:43:50.036977 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:43:50 crc kubenswrapper[4895]: E0129 17:43:50.039457 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:43:51 crc kubenswrapper[4895]: I0129 17:43:51.300008 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm_498f46aa-3aec-486a-99bc-585c811a12c6/util/0.log" Jan 29 17:43:51 crc kubenswrapper[4895]: I0129 17:43:51.452103 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm_498f46aa-3aec-486a-99bc-585c811a12c6/util/0.log" Jan 29 17:43:51 crc kubenswrapper[4895]: I0129 17:43:51.481548 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm_498f46aa-3aec-486a-99bc-585c811a12c6/pull/0.log" Jan 29 17:43:51 crc kubenswrapper[4895]: I0129 17:43:51.497753 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm_498f46aa-3aec-486a-99bc-585c811a12c6/pull/0.log" Jan 29 17:43:51 crc kubenswrapper[4895]: I0129 17:43:51.629177 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm_498f46aa-3aec-486a-99bc-585c811a12c6/util/0.log" Jan 29 17:43:51 crc kubenswrapper[4895]: I0129 17:43:51.681184 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm_498f46aa-3aec-486a-99bc-585c811a12c6/extract/0.log" Jan 29 17:43:51 crc kubenswrapper[4895]: I0129 17:43:51.688742 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfrxcm_498f46aa-3aec-486a-99bc-585c811a12c6/pull/0.log" Jan 29 17:43:51 crc kubenswrapper[4895]: I0129 17:43:51.831948 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g_a9810d12-a970-4769-ae18-6147ea348121/util/0.log" Jan 29 17:43:51 crc kubenswrapper[4895]: I0129 17:43:51.948682 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g_a9810d12-a970-4769-ae18-6147ea348121/util/0.log" Jan 29 17:43:51 crc kubenswrapper[4895]: I0129 17:43:51.982683 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g_a9810d12-a970-4769-ae18-6147ea348121/pull/0.log" Jan 29 17:43:51 crc kubenswrapper[4895]: I0129 17:43:51.992753 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g_a9810d12-a970-4769-ae18-6147ea348121/pull/0.log" Jan 29 17:43:52 crc kubenswrapper[4895]: I0129 17:43:52.132828 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g_a9810d12-a970-4769-ae18-6147ea348121/util/0.log" Jan 29 17:43:52 crc kubenswrapper[4895]: I0129 17:43:52.133829 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g_a9810d12-a970-4769-ae18-6147ea348121/pull/0.log" Jan 29 17:43:52 crc kubenswrapper[4895]: I0129 17:43:52.153236 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713pjm2g_a9810d12-a970-4769-ae18-6147ea348121/extract/0.log" Jan 29 
17:43:52 crc kubenswrapper[4895]: I0129 17:43:52.299594 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p54q7_2765471a-1d69-49cb-8d07-753b572fe408/extract-utilities/0.log" Jan 29 17:43:52 crc kubenswrapper[4895]: I0129 17:43:52.448818 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p54q7_2765471a-1d69-49cb-8d07-753b572fe408/extract-content/0.log" Jan 29 17:43:52 crc kubenswrapper[4895]: I0129 17:43:52.455525 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p54q7_2765471a-1d69-49cb-8d07-753b572fe408/extract-content/0.log" Jan 29 17:43:52 crc kubenswrapper[4895]: I0129 17:43:52.470587 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p54q7_2765471a-1d69-49cb-8d07-753b572fe408/extract-utilities/0.log" Jan 29 17:43:52 crc kubenswrapper[4895]: I0129 17:43:52.649389 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p54q7_2765471a-1d69-49cb-8d07-753b572fe408/extract-utilities/0.log" Jan 29 17:43:52 crc kubenswrapper[4895]: I0129 17:43:52.665318 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p54q7_2765471a-1d69-49cb-8d07-753b572fe408/extract-content/0.log" Jan 29 17:43:52 crc kubenswrapper[4895]: I0129 17:43:52.805610 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fct4h_7ec8e528-cb81-403a-91e6-4dda3ece0f4e/extract-utilities/0.log" Jan 29 17:43:53 crc kubenswrapper[4895]: I0129 17:43:53.108577 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fct4h_7ec8e528-cb81-403a-91e6-4dda3ece0f4e/extract-content/0.log" Jan 29 17:43:53 crc kubenswrapper[4895]: I0129 17:43:53.114914 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-fct4h_7ec8e528-cb81-403a-91e6-4dda3ece0f4e/extract-utilities/0.log" Jan 29 17:43:53 crc kubenswrapper[4895]: I0129 17:43:53.148467 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fct4h_7ec8e528-cb81-403a-91e6-4dda3ece0f4e/extract-content/0.log" Jan 29 17:43:53 crc kubenswrapper[4895]: I0129 17:43:53.251173 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-p54q7_2765471a-1d69-49cb-8d07-753b572fe408/registry-server/0.log" Jan 29 17:43:53 crc kubenswrapper[4895]: I0129 17:43:53.302656 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fct4h_7ec8e528-cb81-403a-91e6-4dda3ece0f4e/extract-utilities/0.log" Jan 29 17:43:53 crc kubenswrapper[4895]: I0129 17:43:53.315731 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fct4h_7ec8e528-cb81-403a-91e6-4dda3ece0f4e/extract-content/0.log" Jan 29 17:43:53 crc kubenswrapper[4895]: I0129 17:43:53.515918 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vzzds_4c4f588f-f4f6-4000-9919-8461a6af64a3/extract-utilities/0.log" Jan 29 17:43:53 crc kubenswrapper[4895]: I0129 17:43:53.643478 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vzzds_4c4f588f-f4f6-4000-9919-8461a6af64a3/extract-utilities/0.log" Jan 29 17:43:53 crc kubenswrapper[4895]: I0129 17:43:53.983568 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vzzds_4c4f588f-f4f6-4000-9919-8461a6af64a3/extract-utilities/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.055502 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-fct4h_7ec8e528-cb81-403a-91e6-4dda3ece0f4e/registry-server/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.200253 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-scmbk_6c105ff6-00bb-4637-8e66-2f7899e80bdf/marketplace-operator/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.233513 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9l2w_a3b60df4-65e6-407a-b3ed-997271ae68b7/extract-utilities/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.374620 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9l2w_a3b60df4-65e6-407a-b3ed-997271ae68b7/extract-utilities/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.391301 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9l2w_a3b60df4-65e6-407a-b3ed-997271ae68b7/extract-content/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.395746 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9l2w_a3b60df4-65e6-407a-b3ed-997271ae68b7/extract-content/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.595954 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9l2w_a3b60df4-65e6-407a-b3ed-997271ae68b7/extract-utilities/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.644199 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9l2w_a3b60df4-65e6-407a-b3ed-997271ae68b7/extract-content/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.766476 4895 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-q9l2w_a3b60df4-65e6-407a-b3ed-997271ae68b7/registry-server/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.805740 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q42rh_e3654e61-241e-4ea7-9b75-7f135d437ed5/extract-utilities/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.958393 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q42rh_e3654e61-241e-4ea7-9b75-7f135d437ed5/extract-content/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.961720 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q42rh_e3654e61-241e-4ea7-9b75-7f135d437ed5/extract-content/0.log" Jan 29 17:43:54 crc kubenswrapper[4895]: I0129 17:43:54.962648 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q42rh_e3654e61-241e-4ea7-9b75-7f135d437ed5/extract-utilities/0.log" Jan 29 17:43:55 crc kubenswrapper[4895]: I0129 17:43:55.118008 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q42rh_e3654e61-241e-4ea7-9b75-7f135d437ed5/extract-content/0.log" Jan 29 17:43:55 crc kubenswrapper[4895]: I0129 17:43:55.136373 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q42rh_e3654e61-241e-4ea7-9b75-7f135d437ed5/extract-utilities/0.log" Jan 29 17:43:55 crc kubenswrapper[4895]: I0129 17:43:55.698413 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q42rh_e3654e61-241e-4ea7-9b75-7f135d437ed5/registry-server/0.log" Jan 29 17:43:57 crc kubenswrapper[4895]: E0129 17:43:57.184585 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: 
Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:43:57 crc kubenswrapper[4895]: E0129 17:43:57.185060 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm5mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vzzds_openshift-marketplace(4c4f588f-f4f6-4000-9919-8461a6af64a3): ErrImagePull: initializing source 
docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:43:57 crc kubenswrapper[4895]: E0129 17:43:57.186789 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:44:02 crc kubenswrapper[4895]: I0129 17:44:02.036472 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:44:02 crc kubenswrapper[4895]: E0129 17:44:02.037252 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:44:08 crc kubenswrapper[4895]: E0129 17:44:08.039616 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:44:14 crc kubenswrapper[4895]: I0129 17:44:14.036815 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:44:14 crc kubenswrapper[4895]: E0129 17:44:14.038113 4895 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:44:21 crc kubenswrapper[4895]: E0129 17:44:21.038568 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:44:27 crc kubenswrapper[4895]: I0129 17:44:27.042130 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:44:27 crc kubenswrapper[4895]: E0129 17:44:27.042924 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:44:34 crc kubenswrapper[4895]: E0129 17:44:34.040857 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:44:41 crc kubenswrapper[4895]: I0129 17:44:41.036983 4895 scope.go:117] "RemoveContainer" 
containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:44:41 crc kubenswrapper[4895]: E0129 17:44:41.037855 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:44:45 crc kubenswrapper[4895]: E0129 17:44:45.039302 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:44:56 crc kubenswrapper[4895]: I0129 17:44:56.037210 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:44:56 crc kubenswrapper[4895]: E0129 17:44:56.037955 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:44:58 crc kubenswrapper[4895]: E0129 17:44:58.042130 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" 
podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.147844 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk"] Jan 29 17:45:00 crc kubenswrapper[4895]: E0129 17:45:00.148576 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff147090-376c-429b-a465-a41d2772de00" containerName="registry-server" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.148591 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff147090-376c-429b-a465-a41d2772de00" containerName="registry-server" Jan 29 17:45:00 crc kubenswrapper[4895]: E0129 17:45:00.148626 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff147090-376c-429b-a465-a41d2772de00" containerName="extract-content" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.148632 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff147090-376c-429b-a465-a41d2772de00" containerName="extract-content" Jan 29 17:45:00 crc kubenswrapper[4895]: E0129 17:45:00.148647 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff147090-376c-429b-a465-a41d2772de00" containerName="extract-utilities" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.148655 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff147090-376c-429b-a465-a41d2772de00" containerName="extract-utilities" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.148883 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff147090-376c-429b-a465-a41d2772de00" containerName="registry-server" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.149547 4895 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.151326 4895 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.153057 4895 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.158307 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk"] Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.200062 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4754497-5986-48ab-a08c-873671d653c0-config-volume\") pod \"collect-profiles-29495145-8rgdk\" (UID: \"e4754497-5986-48ab-a08c-873671d653c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.200389 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e4754497-5986-48ab-a08c-873671d653c0-secret-volume\") pod \"collect-profiles-29495145-8rgdk\" (UID: \"e4754497-5986-48ab-a08c-873671d653c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.200510 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlbl7\" (UniqueName: \"kubernetes.io/projected/e4754497-5986-48ab-a08c-873671d653c0-kube-api-access-tlbl7\") pod \"collect-profiles-29495145-8rgdk\" (UID: \"e4754497-5986-48ab-a08c-873671d653c0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.301955 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlbl7\" (UniqueName: \"kubernetes.io/projected/e4754497-5986-48ab-a08c-873671d653c0-kube-api-access-tlbl7\") pod \"collect-profiles-29495145-8rgdk\" (UID: \"e4754497-5986-48ab-a08c-873671d653c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.302081 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4754497-5986-48ab-a08c-873671d653c0-config-volume\") pod \"collect-profiles-29495145-8rgdk\" (UID: \"e4754497-5986-48ab-a08c-873671d653c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.302137 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e4754497-5986-48ab-a08c-873671d653c0-secret-volume\") pod \"collect-profiles-29495145-8rgdk\" (UID: \"e4754497-5986-48ab-a08c-873671d653c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.303064 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4754497-5986-48ab-a08c-873671d653c0-config-volume\") pod \"collect-profiles-29495145-8rgdk\" (UID: \"e4754497-5986-48ab-a08c-873671d653c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.320716 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e4754497-5986-48ab-a08c-873671d653c0-secret-volume\") pod \"collect-profiles-29495145-8rgdk\" (UID: \"e4754497-5986-48ab-a08c-873671d653c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.339544 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlbl7\" (UniqueName: \"kubernetes.io/projected/e4754497-5986-48ab-a08c-873671d653c0-kube-api-access-tlbl7\") pod \"collect-profiles-29495145-8rgdk\" (UID: \"e4754497-5986-48ab-a08c-873671d653c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.475389 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:00 crc kubenswrapper[4895]: I0129 17:45:00.906332 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk"] Jan 29 17:45:01 crc kubenswrapper[4895]: I0129 17:45:01.712443 4895 generic.go:334] "Generic (PLEG): container finished" podID="e4754497-5986-48ab-a08c-873671d653c0" containerID="23c262f053317243fa9adcd6894a315a6f55def0a985f8eea6662397697063d4" exitCode=0 Jan 29 17:45:01 crc kubenswrapper[4895]: I0129 17:45:01.712536 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" event={"ID":"e4754497-5986-48ab-a08c-873671d653c0","Type":"ContainerDied","Data":"23c262f053317243fa9adcd6894a315a6f55def0a985f8eea6662397697063d4"} Jan 29 17:45:01 crc kubenswrapper[4895]: I0129 17:45:01.712779 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" 
event={"ID":"e4754497-5986-48ab-a08c-873671d653c0","Type":"ContainerStarted","Data":"837303ebe4f7d94932dab8e276b814a85b4c8cc6fff9c8db97d69fe676b157a0"} Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.028018 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.162326 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4754497-5986-48ab-a08c-873671d653c0-config-volume\") pod \"e4754497-5986-48ab-a08c-873671d653c0\" (UID: \"e4754497-5986-48ab-a08c-873671d653c0\") " Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.162468 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlbl7\" (UniqueName: \"kubernetes.io/projected/e4754497-5986-48ab-a08c-873671d653c0-kube-api-access-tlbl7\") pod \"e4754497-5986-48ab-a08c-873671d653c0\" (UID: \"e4754497-5986-48ab-a08c-873671d653c0\") " Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.162516 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e4754497-5986-48ab-a08c-873671d653c0-secret-volume\") pod \"e4754497-5986-48ab-a08c-873671d653c0\" (UID: \"e4754497-5986-48ab-a08c-873671d653c0\") " Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.163006 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4754497-5986-48ab-a08c-873671d653c0-config-volume" (OuterVolumeSpecName: "config-volume") pod "e4754497-5986-48ab-a08c-873671d653c0" (UID: "e4754497-5986-48ab-a08c-873671d653c0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.163659 4895 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4754497-5986-48ab-a08c-873671d653c0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.182495 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4754497-5986-48ab-a08c-873671d653c0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e4754497-5986-48ab-a08c-873671d653c0" (UID: "e4754497-5986-48ab-a08c-873671d653c0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.182532 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4754497-5986-48ab-a08c-873671d653c0-kube-api-access-tlbl7" (OuterVolumeSpecName: "kube-api-access-tlbl7") pod "e4754497-5986-48ab-a08c-873671d653c0" (UID: "e4754497-5986-48ab-a08c-873671d653c0"). InnerVolumeSpecName "kube-api-access-tlbl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.265438 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlbl7\" (UniqueName: \"kubernetes.io/projected/e4754497-5986-48ab-a08c-873671d653c0-kube-api-access-tlbl7\") on node \"crc\" DevicePath \"\"" Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.265469 4895 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e4754497-5986-48ab-a08c-873671d653c0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.729235 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" event={"ID":"e4754497-5986-48ab-a08c-873671d653c0","Type":"ContainerDied","Data":"837303ebe4f7d94932dab8e276b814a85b4c8cc6fff9c8db97d69fe676b157a0"} Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.729276 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495145-8rgdk" Jan 29 17:45:03 crc kubenswrapper[4895]: I0129 17:45:03.729287 4895 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="837303ebe4f7d94932dab8e276b814a85b4c8cc6fff9c8db97d69fe676b157a0" Jan 29 17:45:04 crc kubenswrapper[4895]: I0129 17:45:04.102219 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq"] Jan 29 17:45:04 crc kubenswrapper[4895]: I0129 17:45:04.111665 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-wc5xq"] Jan 29 17:45:05 crc kubenswrapper[4895]: I0129 17:45:05.049513 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec083dbe-3c00-420b-b71a-d56c57270ab6" path="/var/lib/kubelet/pods/ec083dbe-3c00-420b-b71a-d56c57270ab6/volumes" Jan 29 17:45:07 crc kubenswrapper[4895]: I0129 17:45:07.047543 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:45:07 crc kubenswrapper[4895]: I0129 17:45:07.764624 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"9abf8269a8b6962441637fc80f7fa0841b4aff7d61853ac695970a0d5db60dc1"} Jan 29 17:45:11 crc kubenswrapper[4895]: E0129 17:45:11.042684 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:45:20 crc kubenswrapper[4895]: I0129 17:45:20.552643 4895 scope.go:117] "RemoveContainer" 
containerID="38b0ecb61c46b95a6f2a8ed5b66ca13e2b6528e7c4e5cb9b1abb4836d9ad0101" Jan 29 17:45:22 crc kubenswrapper[4895]: E0129 17:45:22.038820 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:45:34 crc kubenswrapper[4895]: E0129 17:45:34.041188 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:45:44 crc kubenswrapper[4895]: I0129 17:45:44.173618 4895 generic.go:334] "Generic (PLEG): container finished" podID="eaa16f29-f14b-4ce4-bf46-2660a43de5fd" containerID="c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191" exitCode=0 Jan 29 17:45:44 crc kubenswrapper[4895]: I0129 17:45:44.173691 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kzbmj/must-gather-j9jb7" event={"ID":"eaa16f29-f14b-4ce4-bf46-2660a43de5fd","Type":"ContainerDied","Data":"c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191"} Jan 29 17:45:44 crc kubenswrapper[4895]: I0129 17:45:44.175624 4895 scope.go:117] "RemoveContainer" containerID="c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191" Jan 29 17:45:45 crc kubenswrapper[4895]: E0129 17:45:45.040855 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" 
podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:45:45 crc kubenswrapper[4895]: I0129 17:45:45.152028 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kzbmj_must-gather-j9jb7_eaa16f29-f14b-4ce4-bf46-2660a43de5fd/gather/0.log" Jan 29 17:45:52 crc kubenswrapper[4895]: I0129 17:45:52.876075 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kzbmj/must-gather-j9jb7"] Jan 29 17:45:52 crc kubenswrapper[4895]: I0129 17:45:52.877302 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kzbmj/must-gather-j9jb7" podUID="eaa16f29-f14b-4ce4-bf46-2660a43de5fd" containerName="copy" containerID="cri-o://3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec" gracePeriod=2 Jan 29 17:45:52 crc kubenswrapper[4895]: I0129 17:45:52.887791 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kzbmj/must-gather-j9jb7"] Jan 29 17:45:53 crc kubenswrapper[4895]: E0129 17:45:53.088741 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa16f29_f14b_4ce4_bf46_2660a43de5fd.slice/crio-conmon-3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa16f29_f14b_4ce4_bf46_2660a43de5fd.slice/crio-3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec.scope\": RecentStats: unable to find data in memory cache]" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.315380 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kzbmj_must-gather-j9jb7_eaa16f29-f14b-4ce4-bf46-2660a43de5fd/copy/0.log" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.316385 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kzbmj/must-gather-j9jb7" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.348932 4895 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kzbmj_must-gather-j9jb7_eaa16f29-f14b-4ce4-bf46-2660a43de5fd/copy/0.log" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.349666 4895 generic.go:334] "Generic (PLEG): container finished" podID="eaa16f29-f14b-4ce4-bf46-2660a43de5fd" containerID="3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec" exitCode=143 Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.350010 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kzbmj/must-gather-j9jb7" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.350032 4895 scope.go:117] "RemoveContainer" containerID="3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.374142 4895 scope.go:117] "RemoveContainer" containerID="c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.451635 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64nf4\" (UniqueName: \"kubernetes.io/projected/eaa16f29-f14b-4ce4-bf46-2660a43de5fd-kube-api-access-64nf4\") pod \"eaa16f29-f14b-4ce4-bf46-2660a43de5fd\" (UID: \"eaa16f29-f14b-4ce4-bf46-2660a43de5fd\") " Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.451741 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eaa16f29-f14b-4ce4-bf46-2660a43de5fd-must-gather-output\") pod \"eaa16f29-f14b-4ce4-bf46-2660a43de5fd\" (UID: \"eaa16f29-f14b-4ce4-bf46-2660a43de5fd\") " Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.458093 4895 scope.go:117] "RemoveContainer" 
containerID="3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec" Jan 29 17:45:53 crc kubenswrapper[4895]: E0129 17:45:53.458656 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec\": container with ID starting with 3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec not found: ID does not exist" containerID="3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.458731 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec"} err="failed to get container status \"3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec\": rpc error: code = NotFound desc = could not find container \"3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec\": container with ID starting with 3d258e3e430cf60b62de8fee63c6f32ff7771d0c5a28abc917002e94148f17ec not found: ID does not exist" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.458779 4895 scope.go:117] "RemoveContainer" containerID="c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191" Jan 29 17:45:53 crc kubenswrapper[4895]: E0129 17:45:53.459790 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191\": container with ID starting with c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191 not found: ID does not exist" containerID="c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.459843 4895 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191"} err="failed to get container status \"c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191\": rpc error: code = NotFound desc = could not find container \"c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191\": container with ID starting with c97081d963b25111c4f162fda7cc47adee22c973d60e100b168865a8b0774191 not found: ID does not exist" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.460223 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa16f29-f14b-4ce4-bf46-2660a43de5fd-kube-api-access-64nf4" (OuterVolumeSpecName: "kube-api-access-64nf4") pod "eaa16f29-f14b-4ce4-bf46-2660a43de5fd" (UID: "eaa16f29-f14b-4ce4-bf46-2660a43de5fd"). InnerVolumeSpecName "kube-api-access-64nf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.554974 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64nf4\" (UniqueName: \"kubernetes.io/projected/eaa16f29-f14b-4ce4-bf46-2660a43de5fd-kube-api-access-64nf4\") on node \"crc\" DevicePath \"\"" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.654403 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa16f29-f14b-4ce4-bf46-2660a43de5fd-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "eaa16f29-f14b-4ce4-bf46-2660a43de5fd" (UID: "eaa16f29-f14b-4ce4-bf46-2660a43de5fd"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:45:53 crc kubenswrapper[4895]: I0129 17:45:53.658193 4895 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eaa16f29-f14b-4ce4-bf46-2660a43de5fd-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 17:45:55 crc kubenswrapper[4895]: I0129 17:45:55.058074 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa16f29-f14b-4ce4-bf46-2660a43de5fd" path="/var/lib/kubelet/pods/eaa16f29-f14b-4ce4-bf46-2660a43de5fd/volumes" Jan 29 17:46:00 crc kubenswrapper[4895]: E0129 17:46:00.039356 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:46:15 crc kubenswrapper[4895]: E0129 17:46:15.042729 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:46:30 crc kubenswrapper[4895]: E0129 17:46:30.055527 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:46:45 crc kubenswrapper[4895]: I0129 17:46:45.040076 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:46:45 crc kubenswrapper[4895]: E0129 17:46:45.161911 4895 
log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:46:45 crc kubenswrapper[4895]: E0129 17:46:45.162306 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm5mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod community-operators-vzzds_openshift-marketplace(4c4f588f-f4f6-4000-9919-8461a6af64a3): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:46:45 crc kubenswrapper[4895]: E0129 17:46:45.163481 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.193695 4895 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9c98x"] Jan 29 17:46:48 crc kubenswrapper[4895]: E0129 17:46:48.194528 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa16f29-f14b-4ce4-bf46-2660a43de5fd" containerName="copy" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.194546 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa16f29-f14b-4ce4-bf46-2660a43de5fd" containerName="copy" Jan 29 17:46:48 crc kubenswrapper[4895]: E0129 17:46:48.194564 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4754497-5986-48ab-a08c-873671d653c0" containerName="collect-profiles" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.194572 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4754497-5986-48ab-a08c-873671d653c0" containerName="collect-profiles" Jan 29 17:46:48 crc kubenswrapper[4895]: E0129 17:46:48.194599 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa16f29-f14b-4ce4-bf46-2660a43de5fd" containerName="gather" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.194607 4895 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="eaa16f29-f14b-4ce4-bf46-2660a43de5fd" containerName="gather" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.197363 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa16f29-f14b-4ce4-bf46-2660a43de5fd" containerName="gather" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.197407 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa16f29-f14b-4ce4-bf46-2660a43de5fd" containerName="copy" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.197421 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4754497-5986-48ab-a08c-873671d653c0" containerName="collect-profiles" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.199215 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.212139 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9c98x"] Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.328603 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4381d30d-29df-481a-8848-026bff68990f-utilities\") pod \"redhat-operators-9c98x\" (UID: \"4381d30d-29df-481a-8848-026bff68990f\") " pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.328668 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7ddv\" (UniqueName: \"kubernetes.io/projected/4381d30d-29df-481a-8848-026bff68990f-kube-api-access-q7ddv\") pod \"redhat-operators-9c98x\" (UID: \"4381d30d-29df-481a-8848-026bff68990f\") " pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.329256 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4381d30d-29df-481a-8848-026bff68990f-catalog-content\") pod \"redhat-operators-9c98x\" (UID: \"4381d30d-29df-481a-8848-026bff68990f\") " pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.432175 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7ddv\" (UniqueName: \"kubernetes.io/projected/4381d30d-29df-481a-8848-026bff68990f-kube-api-access-q7ddv\") pod \"redhat-operators-9c98x\" (UID: \"4381d30d-29df-481a-8848-026bff68990f\") " pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.432340 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4381d30d-29df-481a-8848-026bff68990f-catalog-content\") pod \"redhat-operators-9c98x\" (UID: \"4381d30d-29df-481a-8848-026bff68990f\") " pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.432400 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4381d30d-29df-481a-8848-026bff68990f-utilities\") pod \"redhat-operators-9c98x\" (UID: \"4381d30d-29df-481a-8848-026bff68990f\") " pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.432951 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4381d30d-29df-481a-8848-026bff68990f-utilities\") pod \"redhat-operators-9c98x\" (UID: \"4381d30d-29df-481a-8848-026bff68990f\") " pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.432988 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4381d30d-29df-481a-8848-026bff68990f-catalog-content\") pod \"redhat-operators-9c98x\" (UID: \"4381d30d-29df-481a-8848-026bff68990f\") " pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.452680 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7ddv\" (UniqueName: \"kubernetes.io/projected/4381d30d-29df-481a-8848-026bff68990f-kube-api-access-q7ddv\") pod \"redhat-operators-9c98x\" (UID: \"4381d30d-29df-481a-8848-026bff68990f\") " pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:46:48 crc kubenswrapper[4895]: I0129 17:46:48.534655 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:46:49 crc kubenswrapper[4895]: I0129 17:46:49.002957 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9c98x"] Jan 29 17:46:49 crc kubenswrapper[4895]: I0129 17:46:49.117570 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c98x" event={"ID":"4381d30d-29df-481a-8848-026bff68990f","Type":"ContainerStarted","Data":"a5f52e9c3be80c598a73714a9d077d95eef61fac4906acc590e4de4bfc8ed0f7"} Jan 29 17:46:50 crc kubenswrapper[4895]: I0129 17:46:50.128079 4895 generic.go:334] "Generic (PLEG): container finished" podID="4381d30d-29df-481a-8848-026bff68990f" containerID="e00a52f6abe04d10270d9db4232ecd6400b67d86ffc5b3ebaa4e4b9ac1012fdf" exitCode=0 Jan 29 17:46:50 crc kubenswrapper[4895]: I0129 17:46:50.128141 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c98x" event={"ID":"4381d30d-29df-481a-8848-026bff68990f","Type":"ContainerDied","Data":"e00a52f6abe04d10270d9db4232ecd6400b67d86ffc5b3ebaa4e4b9ac1012fdf"} Jan 29 17:46:50 crc kubenswrapper[4895]: E0129 17:46:50.254141 4895 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:46:50 crc kubenswrapper[4895]: E0129 17:46:50.254286 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7ddv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-9c98x_openshift-marketplace(4381d30d-29df-481a-8848-026bff68990f): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:46:50 crc kubenswrapper[4895]: E0129 17:46:50.255470 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:46:51 crc kubenswrapper[4895]: E0129 17:46:51.139333 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:46:59 crc kubenswrapper[4895]: E0129 17:46:59.044060 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:47:04 crc kubenswrapper[4895]: E0129 17:47:04.174399 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:47:04 crc kubenswrapper[4895]: E0129 17:47:04.174999 4895 kuberuntime_manager.go:1274] 
"Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7ddv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9c98x_openshift-marketplace(4381d30d-29df-481a-8848-026bff68990f): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:47:04 crc kubenswrapper[4895]: E0129 17:47:04.176280 4895 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:47:10 crc kubenswrapper[4895]: E0129 17:47:10.039009 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:47:16 crc kubenswrapper[4895]: E0129 17:47:16.040328 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:47:21 crc kubenswrapper[4895]: E0129 17:47:21.038969 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:47:27 crc kubenswrapper[4895]: E0129 17:47:27.183529 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:47:27 crc kubenswrapper[4895]: E0129 17:47:27.184291 4895 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7ddv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9c98x_openshift-marketplace(4381d30d-29df-481a-8848-026bff68990f): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:47:27 crc kubenswrapper[4895]: E0129 17:47:27.185536 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:47:27 crc kubenswrapper[4895]: I0129 17:47:27.823851 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:47:27 crc kubenswrapper[4895]: I0129 17:47:27.823969 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:47:33 crc kubenswrapper[4895]: E0129 17:47:33.039566 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:47:40 crc kubenswrapper[4895]: E0129 17:47:40.040113 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:47:46 crc kubenswrapper[4895]: E0129 17:47:46.040298 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:47:53 crc kubenswrapper[4895]: E0129 17:47:53.042951 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:47:57 crc kubenswrapper[4895]: I0129 17:47:57.823493 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:47:57 crc kubenswrapper[4895]: I0129 17:47:57.824188 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:48:00 crc kubenswrapper[4895]: E0129 17:48:00.039706 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:48:04 crc kubenswrapper[4895]: E0129 17:48:04.040973 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:48:14 crc kubenswrapper[4895]: E0129 17:48:14.041063 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:48:16 crc kubenswrapper[4895]: E0129 17:48:16.176534 4895 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 17:48:16 crc kubenswrapper[4895]: E0129 17:48:16.177137 4895 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7ddv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9c98x_openshift-marketplace(4381d30d-29df-481a-8848-026bff68990f): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:48:16 crc kubenswrapper[4895]: E0129 17:48:16.178397 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:48:27 crc kubenswrapper[4895]: E0129 17:48:27.047754 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:48:27 crc kubenswrapper[4895]: I0129 17:48:27.823421 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:48:27 crc kubenswrapper[4895]: I0129 17:48:27.824036 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:48:27 crc kubenswrapper[4895]: I0129 17:48:27.824121 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 17:48:27 crc kubenswrapper[4895]: I0129 17:48:27.825568 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9abf8269a8b6962441637fc80f7fa0841b4aff7d61853ac695970a0d5db60dc1"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:48:27 crc kubenswrapper[4895]: I0129 17:48:27.825713 4895 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://9abf8269a8b6962441637fc80f7fa0841b4aff7d61853ac695970a0d5db60dc1" gracePeriod=600 Jan 29 17:48:28 crc kubenswrapper[4895]: E0129 17:48:28.044296 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:48:28 crc kubenswrapper[4895]: I0129 17:48:28.163942 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="9abf8269a8b6962441637fc80f7fa0841b4aff7d61853ac695970a0d5db60dc1" exitCode=0 Jan 29 17:48:28 crc kubenswrapper[4895]: I0129 17:48:28.164025 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"9abf8269a8b6962441637fc80f7fa0841b4aff7d61853ac695970a0d5db60dc1"} Jan 29 17:48:28 crc kubenswrapper[4895]: I0129 17:48:28.164272 4895 scope.go:117] "RemoveContainer" containerID="6eb7e26abcf57f65563467ecb1cb0ec49263123f7ca25006976d705e06c7693b" Jan 29 17:48:29 crc kubenswrapper[4895]: I0129 17:48:29.179513 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerStarted","Data":"7f15e55fbff1e227d8f24e31334ff7e004b8125a9636dc4c49af28547f3679cf"} Jan 29 17:48:42 crc kubenswrapper[4895]: E0129 17:48:42.040828 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:48:43 crc kubenswrapper[4895]: E0129 17:48:43.041496 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:48:54 crc kubenswrapper[4895]: E0129 17:48:54.041840 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:48:56 crc kubenswrapper[4895]: E0129 17:48:56.040426 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:49:07 crc kubenswrapper[4895]: E0129 17:49:07.048599 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:49:10 crc kubenswrapper[4895]: E0129 17:49:10.039962 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:49:19 crc kubenswrapper[4895]: E0129 17:49:19.040222 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:49:21 crc kubenswrapper[4895]: E0129 17:49:21.040430 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:49:33 crc kubenswrapper[4895]: E0129 17:49:33.039989 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" Jan 29 17:49:34 crc kubenswrapper[4895]: E0129 17:49:34.039950 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:49:46 crc kubenswrapper[4895]: I0129 17:49:46.019373 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c98x" 
event={"ID":"4381d30d-29df-481a-8848-026bff68990f","Type":"ContainerStarted","Data":"afb3758c22d4caf2640e6d60aa03a11a25eac2f97acb2ab3e0db9eb3ff00d9fe"} Jan 29 17:49:48 crc kubenswrapper[4895]: I0129 17:49:48.042530 4895 generic.go:334] "Generic (PLEG): container finished" podID="4381d30d-29df-481a-8848-026bff68990f" containerID="afb3758c22d4caf2640e6d60aa03a11a25eac2f97acb2ab3e0db9eb3ff00d9fe" exitCode=0 Jan 29 17:49:48 crc kubenswrapper[4895]: I0129 17:49:48.042710 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c98x" event={"ID":"4381d30d-29df-481a-8848-026bff68990f","Type":"ContainerDied","Data":"afb3758c22d4caf2640e6d60aa03a11a25eac2f97acb2ab3e0db9eb3ff00d9fe"} Jan 29 17:49:49 crc kubenswrapper[4895]: E0129 17:49:49.044003 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:49:49 crc kubenswrapper[4895]: I0129 17:49:49.075125 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c98x" event={"ID":"4381d30d-29df-481a-8848-026bff68990f","Type":"ContainerStarted","Data":"8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a"} Jan 29 17:49:49 crc kubenswrapper[4895]: I0129 17:49:49.104751 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9c98x" podStartSLOduration=2.704851454 podStartE2EDuration="3m1.104730998s" podCreationTimestamp="2026-01-29 17:46:48 +0000 UTC" firstStartedPulling="2026-01-29 17:46:50.130812454 +0000 UTC m=+5693.933789758" lastFinishedPulling="2026-01-29 17:49:48.530692028 +0000 UTC m=+5872.333669302" observedRunningTime="2026-01-29 17:49:49.094306625 +0000 UTC m=+5872.897283889" 
watchObservedRunningTime="2026-01-29 17:49:49.104730998 +0000 UTC m=+5872.907708272" Jan 29 17:49:58 crc kubenswrapper[4895]: I0129 17:49:58.534740 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:49:58 crc kubenswrapper[4895]: I0129 17:49:58.535202 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:49:58 crc kubenswrapper[4895]: I0129 17:49:58.583819 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:49:59 crc kubenswrapper[4895]: I0129 17:49:59.216270 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:49:59 crc kubenswrapper[4895]: I0129 17:49:59.265627 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9c98x"] Jan 29 17:50:01 crc kubenswrapper[4895]: I0129 17:50:01.165146 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9c98x" podUID="4381d30d-29df-481a-8848-026bff68990f" containerName="registry-server" containerID="cri-o://8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a" gracePeriod=2 Jan 29 17:50:01 crc kubenswrapper[4895]: I0129 17:50:01.676061 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:50:01 crc kubenswrapper[4895]: I0129 17:50:01.839328 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7ddv\" (UniqueName: \"kubernetes.io/projected/4381d30d-29df-481a-8848-026bff68990f-kube-api-access-q7ddv\") pod \"4381d30d-29df-481a-8848-026bff68990f\" (UID: \"4381d30d-29df-481a-8848-026bff68990f\") " Jan 29 17:50:01 crc kubenswrapper[4895]: I0129 17:50:01.839407 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4381d30d-29df-481a-8848-026bff68990f-utilities\") pod \"4381d30d-29df-481a-8848-026bff68990f\" (UID: \"4381d30d-29df-481a-8848-026bff68990f\") " Jan 29 17:50:01 crc kubenswrapper[4895]: I0129 17:50:01.839504 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4381d30d-29df-481a-8848-026bff68990f-catalog-content\") pod \"4381d30d-29df-481a-8848-026bff68990f\" (UID: \"4381d30d-29df-481a-8848-026bff68990f\") " Jan 29 17:50:01 crc kubenswrapper[4895]: I0129 17:50:01.841192 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4381d30d-29df-481a-8848-026bff68990f-utilities" (OuterVolumeSpecName: "utilities") pod "4381d30d-29df-481a-8848-026bff68990f" (UID: "4381d30d-29df-481a-8848-026bff68990f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:50:01 crc kubenswrapper[4895]: I0129 17:50:01.849120 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4381d30d-29df-481a-8848-026bff68990f-kube-api-access-q7ddv" (OuterVolumeSpecName: "kube-api-access-q7ddv") pod "4381d30d-29df-481a-8848-026bff68990f" (UID: "4381d30d-29df-481a-8848-026bff68990f"). InnerVolumeSpecName "kube-api-access-q7ddv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:50:01 crc kubenswrapper[4895]: I0129 17:50:01.941806 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7ddv\" (UniqueName: \"kubernetes.io/projected/4381d30d-29df-481a-8848-026bff68990f-kube-api-access-q7ddv\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:01 crc kubenswrapper[4895]: I0129 17:50:01.941841 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4381d30d-29df-481a-8848-026bff68990f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:01 crc kubenswrapper[4895]: I0129 17:50:01.953127 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4381d30d-29df-481a-8848-026bff68990f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4381d30d-29df-481a-8848-026bff68990f" (UID: "4381d30d-29df-481a-8848-026bff68990f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.042941 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4381d30d-29df-481a-8848-026bff68990f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.175622 4895 generic.go:334] "Generic (PLEG): container finished" podID="4381d30d-29df-481a-8848-026bff68990f" containerID="8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a" exitCode=0 Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.175670 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9c98x" event={"ID":"4381d30d-29df-481a-8848-026bff68990f","Type":"ContainerDied","Data":"8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a"} Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.175779 4895 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-9c98x" event={"ID":"4381d30d-29df-481a-8848-026bff68990f","Type":"ContainerDied","Data":"a5f52e9c3be80c598a73714a9d077d95eef61fac4906acc590e4de4bfc8ed0f7"} Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.175809 4895 scope.go:117] "RemoveContainer" containerID="8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a" Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.177048 4895 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9c98x" Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.197459 4895 scope.go:117] "RemoveContainer" containerID="afb3758c22d4caf2640e6d60aa03a11a25eac2f97acb2ab3e0db9eb3ff00d9fe" Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.222947 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9c98x"] Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.233022 4895 scope.go:117] "RemoveContainer" containerID="e00a52f6abe04d10270d9db4232ecd6400b67d86ffc5b3ebaa4e4b9ac1012fdf" Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.236839 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9c98x"] Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.278396 4895 scope.go:117] "RemoveContainer" containerID="8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a" Jan 29 17:50:02 crc kubenswrapper[4895]: E0129 17:50:02.278834 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a\": container with ID starting with 8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a not found: ID does not exist" containerID="8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a" Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.278884 4895 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a"} err="failed to get container status \"8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a\": rpc error: code = NotFound desc = could not find container \"8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a\": container with ID starting with 8743f538652f2b3f80a94fabde7f255a85937dc3ef53dd658f7aefb00e56214a not found: ID does not exist" Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.278911 4895 scope.go:117] "RemoveContainer" containerID="afb3758c22d4caf2640e6d60aa03a11a25eac2f97acb2ab3e0db9eb3ff00d9fe" Jan 29 17:50:02 crc kubenswrapper[4895]: E0129 17:50:02.279222 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb3758c22d4caf2640e6d60aa03a11a25eac2f97acb2ab3e0db9eb3ff00d9fe\": container with ID starting with afb3758c22d4caf2640e6d60aa03a11a25eac2f97acb2ab3e0db9eb3ff00d9fe not found: ID does not exist" containerID="afb3758c22d4caf2640e6d60aa03a11a25eac2f97acb2ab3e0db9eb3ff00d9fe" Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.279276 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb3758c22d4caf2640e6d60aa03a11a25eac2f97acb2ab3e0db9eb3ff00d9fe"} err="failed to get container status \"afb3758c22d4caf2640e6d60aa03a11a25eac2f97acb2ab3e0db9eb3ff00d9fe\": rpc error: code = NotFound desc = could not find container \"afb3758c22d4caf2640e6d60aa03a11a25eac2f97acb2ab3e0db9eb3ff00d9fe\": container with ID starting with afb3758c22d4caf2640e6d60aa03a11a25eac2f97acb2ab3e0db9eb3ff00d9fe not found: ID does not exist" Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.279311 4895 scope.go:117] "RemoveContainer" containerID="e00a52f6abe04d10270d9db4232ecd6400b67d86ffc5b3ebaa4e4b9ac1012fdf" Jan 29 17:50:02 crc kubenswrapper[4895]: E0129 
17:50:02.279621 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00a52f6abe04d10270d9db4232ecd6400b67d86ffc5b3ebaa4e4b9ac1012fdf\": container with ID starting with e00a52f6abe04d10270d9db4232ecd6400b67d86ffc5b3ebaa4e4b9ac1012fdf not found: ID does not exist" containerID="e00a52f6abe04d10270d9db4232ecd6400b67d86ffc5b3ebaa4e4b9ac1012fdf" Jan 29 17:50:02 crc kubenswrapper[4895]: I0129 17:50:02.279679 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00a52f6abe04d10270d9db4232ecd6400b67d86ffc5b3ebaa4e4b9ac1012fdf"} err="failed to get container status \"e00a52f6abe04d10270d9db4232ecd6400b67d86ffc5b3ebaa4e4b9ac1012fdf\": rpc error: code = NotFound desc = could not find container \"e00a52f6abe04d10270d9db4232ecd6400b67d86ffc5b3ebaa4e4b9ac1012fdf\": container with ID starting with e00a52f6abe04d10270d9db4232ecd6400b67d86ffc5b3ebaa4e4b9ac1012fdf not found: ID does not exist" Jan 29 17:50:03 crc kubenswrapper[4895]: E0129 17:50:03.038858 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:50:03 crc kubenswrapper[4895]: I0129 17:50:03.056176 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4381d30d-29df-481a-8848-026bff68990f" path="/var/lib/kubelet/pods/4381d30d-29df-481a-8848-026bff68990f/volumes" Jan 29 17:50:18 crc kubenswrapper[4895]: E0129 17:50:18.040678 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:50:30 crc kubenswrapper[4895]: E0129 17:50:30.038773 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:50:45 crc kubenswrapper[4895]: E0129 17:50:45.038933 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:50:57 crc kubenswrapper[4895]: I0129 17:50:57.824061 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:50:57 crc kubenswrapper[4895]: I0129 17:50:57.824968 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:50:58 crc kubenswrapper[4895]: E0129 17:50:58.039292 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" 
podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:51:13 crc kubenswrapper[4895]: E0129 17:51:13.044398 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:51:27 crc kubenswrapper[4895]: I0129 17:51:27.823418 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:51:27 crc kubenswrapper[4895]: I0129 17:51:27.824040 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:51:28 crc kubenswrapper[4895]: E0129 17:51:28.040210 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:51:42 crc kubenswrapper[4895]: E0129 17:51:42.039135 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" Jan 29 17:51:56 crc 
kubenswrapper[4895]: I0129 17:51:56.040841 4895 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:51:57 crc kubenswrapper[4895]: I0129 17:51:57.350830 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzzds" event={"ID":"4c4f588f-f4f6-4000-9919-8461a6af64a3","Type":"ContainerStarted","Data":"5f3e4a6a13c5184d694ebdce44787b2d01a1f4a4c8acbba1d1868a1bbf24ace9"} Jan 29 17:51:57 crc kubenswrapper[4895]: I0129 17:51:57.823546 4895 patch_prober.go:28] interesting pod/machine-config-daemon-qh8vw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:51:57 crc kubenswrapper[4895]: I0129 17:51:57.824269 4895 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:51:57 crc kubenswrapper[4895]: I0129 17:51:57.824481 4895 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" Jan 29 17:51:57 crc kubenswrapper[4895]: I0129 17:51:57.825489 4895 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f15e55fbff1e227d8f24e31334ff7e004b8125a9636dc4c49af28547f3679cf"} pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:51:57 crc kubenswrapper[4895]: I0129 17:51:57.825766 4895 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerName="machine-config-daemon" containerID="cri-o://7f15e55fbff1e227d8f24e31334ff7e004b8125a9636dc4c49af28547f3679cf" gracePeriod=600 Jan 29 17:51:57 crc kubenswrapper[4895]: E0129 17:51:57.999769 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:51:58 crc kubenswrapper[4895]: I0129 17:51:58.367525 4895 generic.go:334] "Generic (PLEG): container finished" podID="9af81de5-cf3e-4437-b9c1-32ef1495f362" containerID="7f15e55fbff1e227d8f24e31334ff7e004b8125a9636dc4c49af28547f3679cf" exitCode=0 Jan 29 17:51:58 crc kubenswrapper[4895]: I0129 17:51:58.367619 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" event={"ID":"9af81de5-cf3e-4437-b9c1-32ef1495f362","Type":"ContainerDied","Data":"7f15e55fbff1e227d8f24e31334ff7e004b8125a9636dc4c49af28547f3679cf"} Jan 29 17:51:58 crc kubenswrapper[4895]: I0129 17:51:58.367663 4895 scope.go:117] "RemoveContainer" containerID="9abf8269a8b6962441637fc80f7fa0841b4aff7d61853ac695970a0d5db60dc1" Jan 29 17:51:58 crc kubenswrapper[4895]: I0129 17:51:58.369084 4895 scope.go:117] "RemoveContainer" containerID="7f15e55fbff1e227d8f24e31334ff7e004b8125a9636dc4c49af28547f3679cf" Jan 29 17:51:58 crc kubenswrapper[4895]: E0129 17:51:58.369738 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:51:58 crc kubenswrapper[4895]: I0129 17:51:58.371484 4895 generic.go:334] "Generic (PLEG): container finished" podID="4c4f588f-f4f6-4000-9919-8461a6af64a3" containerID="5f3e4a6a13c5184d694ebdce44787b2d01a1f4a4c8acbba1d1868a1bbf24ace9" exitCode=0 Jan 29 17:51:58 crc kubenswrapper[4895]: I0129 17:51:58.371512 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzzds" event={"ID":"4c4f588f-f4f6-4000-9919-8461a6af64a3","Type":"ContainerDied","Data":"5f3e4a6a13c5184d694ebdce44787b2d01a1f4a4c8acbba1d1868a1bbf24ace9"} Jan 29 17:52:00 crc kubenswrapper[4895]: I0129 17:52:00.398198 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzzds" event={"ID":"4c4f588f-f4f6-4000-9919-8461a6af64a3","Type":"ContainerStarted","Data":"4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9"} Jan 29 17:52:00 crc kubenswrapper[4895]: I0129 17:52:00.422848 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vzzds" podStartSLOduration=2.220338914 podStartE2EDuration="10m55.422830855s" podCreationTimestamp="2026-01-29 17:41:05 +0000 UTC" firstStartedPulling="2026-01-29 17:41:06.600544674 +0000 UTC m=+5350.403521968" lastFinishedPulling="2026-01-29 17:51:59.803036645 +0000 UTC m=+6003.606013909" observedRunningTime="2026-01-29 17:52:00.419403802 +0000 UTC m=+6004.222381076" watchObservedRunningTime="2026-01-29 17:52:00.422830855 +0000 UTC m=+6004.225808119" Jan 29 17:52:05 crc kubenswrapper[4895]: I0129 17:52:05.483320 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:52:05 crc kubenswrapper[4895]: 
I0129 17:52:05.484002 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:52:05 crc kubenswrapper[4895]: I0129 17:52:05.558243 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:52:06 crc kubenswrapper[4895]: I0129 17:52:06.533786 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:52:06 crc kubenswrapper[4895]: I0129 17:52:06.589058 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vzzds"] Jan 29 17:52:08 crc kubenswrapper[4895]: I0129 17:52:08.476459 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vzzds" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" containerName="registry-server" containerID="cri-o://4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9" gracePeriod=2 Jan 29 17:52:08 crc kubenswrapper[4895]: I0129 17:52:08.999071 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.125587 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4f588f-f4f6-4000-9919-8461a6af64a3-catalog-content\") pod \"4c4f588f-f4f6-4000-9919-8461a6af64a3\" (UID: \"4c4f588f-f4f6-4000-9919-8461a6af64a3\") " Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.125650 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm5mf\" (UniqueName: \"kubernetes.io/projected/4c4f588f-f4f6-4000-9919-8461a6af64a3-kube-api-access-qm5mf\") pod \"4c4f588f-f4f6-4000-9919-8461a6af64a3\" (UID: \"4c4f588f-f4f6-4000-9919-8461a6af64a3\") " Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.125729 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4f588f-f4f6-4000-9919-8461a6af64a3-utilities\") pod \"4c4f588f-f4f6-4000-9919-8461a6af64a3\" (UID: \"4c4f588f-f4f6-4000-9919-8461a6af64a3\") " Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.127244 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c4f588f-f4f6-4000-9919-8461a6af64a3-utilities" (OuterVolumeSpecName: "utilities") pod "4c4f588f-f4f6-4000-9919-8461a6af64a3" (UID: "4c4f588f-f4f6-4000-9919-8461a6af64a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.132538 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c4f588f-f4f6-4000-9919-8461a6af64a3-kube-api-access-qm5mf" (OuterVolumeSpecName: "kube-api-access-qm5mf") pod "4c4f588f-f4f6-4000-9919-8461a6af64a3" (UID: "4c4f588f-f4f6-4000-9919-8461a6af64a3"). InnerVolumeSpecName "kube-api-access-qm5mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.190972 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c4f588f-f4f6-4000-9919-8461a6af64a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c4f588f-f4f6-4000-9919-8461a6af64a3" (UID: "4c4f588f-f4f6-4000-9919-8461a6af64a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.228471 4895 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4f588f-f4f6-4000-9919-8461a6af64a3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.228522 4895 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4f588f-f4f6-4000-9919-8461a6af64a3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.228542 4895 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm5mf\" (UniqueName: \"kubernetes.io/projected/4c4f588f-f4f6-4000-9919-8461a6af64a3-kube-api-access-qm5mf\") on node \"crc\" DevicePath \"\"" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.488447 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vzzds" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.488489 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzzds" event={"ID":"4c4f588f-f4f6-4000-9919-8461a6af64a3","Type":"ContainerDied","Data":"4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9"} Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.488568 4895 scope.go:117] "RemoveContainer" containerID="4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.488447 4895 generic.go:334] "Generic (PLEG): container finished" podID="4c4f588f-f4f6-4000-9919-8461a6af64a3" containerID="4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9" exitCode=0 Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.488666 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzzds" event={"ID":"4c4f588f-f4f6-4000-9919-8461a6af64a3","Type":"ContainerDied","Data":"c1089ae412b9d709af5872ad316adb99df27d095bf8738b13301a8e804435905"} Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.518778 4895 scope.go:117] "RemoveContainer" containerID="5f3e4a6a13c5184d694ebdce44787b2d01a1f4a4c8acbba1d1868a1bbf24ace9" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.553267 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vzzds"] Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.558101 4895 scope.go:117] "RemoveContainer" containerID="86beb07b61a85f6fbd4d24a6e019b0ac73a59e7b477e6aeb7bfe10fc57ba0ae7" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.572586 4895 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vzzds"] Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.605292 4895 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-8flm2"] Jan 29 17:52:09 crc kubenswrapper[4895]: E0129 17:52:09.606299 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" containerName="extract-content" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.606328 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" containerName="extract-content" Jan 29 17:52:09 crc kubenswrapper[4895]: E0129 17:52:09.606367 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" containerName="extract-utilities" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.606377 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" containerName="extract-utilities" Jan 29 17:52:09 crc kubenswrapper[4895]: E0129 17:52:09.606390 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4381d30d-29df-481a-8848-026bff68990f" containerName="registry-server" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.606398 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4381d30d-29df-481a-8848-026bff68990f" containerName="registry-server" Jan 29 17:52:09 crc kubenswrapper[4895]: E0129 17:52:09.606411 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" containerName="registry-server" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.606418 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" containerName="registry-server" Jan 29 17:52:09 crc kubenswrapper[4895]: E0129 17:52:09.606445 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4381d30d-29df-481a-8848-026bff68990f" containerName="extract-content" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.606452 4895 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4381d30d-29df-481a-8848-026bff68990f" containerName="extract-content" Jan 29 17:52:09 crc kubenswrapper[4895]: E0129 17:52:09.606466 4895 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4381d30d-29df-481a-8848-026bff68990f" containerName="extract-utilities" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.606474 4895 state_mem.go:107] "Deleted CPUSet assignment" podUID="4381d30d-29df-481a-8848-026bff68990f" containerName="extract-utilities" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.606717 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4381d30d-29df-481a-8848-026bff68990f" containerName="registry-server" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.606733 4895 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" containerName="registry-server" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.608560 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.631065 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8flm2"] Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.638944 4895 scope.go:117] "RemoveContainer" containerID="4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9" Jan 29 17:52:09 crc kubenswrapper[4895]: E0129 17:52:09.642366 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9\": container with ID starting with 4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9 not found: ID does not exist" containerID="4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.642412 4895 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9"} err="failed to get container status \"4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9\": rpc error: code = NotFound desc = could not find container \"4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9\": container with ID starting with 4821f2f8041b888319bfb1749c769a87f18c34b99c5672dbc104809c83178ec9 not found: ID does not exist" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.642438 4895 scope.go:117] "RemoveContainer" containerID="5f3e4a6a13c5184d694ebdce44787b2d01a1f4a4c8acbba1d1868a1bbf24ace9" Jan 29 17:52:09 crc kubenswrapper[4895]: E0129 17:52:09.642974 4895 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f3e4a6a13c5184d694ebdce44787b2d01a1f4a4c8acbba1d1868a1bbf24ace9\": container with ID starting with 5f3e4a6a13c5184d694ebdce44787b2d01a1f4a4c8acbba1d1868a1bbf24ace9 not found: ID does not exist" containerID="5f3e4a6a13c5184d694ebdce44787b2d01a1f4a4c8acbba1d1868a1bbf24ace9" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.643022 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f3e4a6a13c5184d694ebdce44787b2d01a1f4a4c8acbba1d1868a1bbf24ace9"} err="failed to get container status \"5f3e4a6a13c5184d694ebdce44787b2d01a1f4a4c8acbba1d1868a1bbf24ace9\": rpc error: code = NotFound desc = could not find container \"5f3e4a6a13c5184d694ebdce44787b2d01a1f4a4c8acbba1d1868a1bbf24ace9\": container with ID starting with 5f3e4a6a13c5184d694ebdce44787b2d01a1f4a4c8acbba1d1868a1bbf24ace9 not found: ID does not exist" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.643053 4895 scope.go:117] "RemoveContainer" containerID="86beb07b61a85f6fbd4d24a6e019b0ac73a59e7b477e6aeb7bfe10fc57ba0ae7" Jan 29 17:52:09 crc kubenswrapper[4895]: E0129 17:52:09.643375 4895 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86beb07b61a85f6fbd4d24a6e019b0ac73a59e7b477e6aeb7bfe10fc57ba0ae7\": container with ID starting with 86beb07b61a85f6fbd4d24a6e019b0ac73a59e7b477e6aeb7bfe10fc57ba0ae7 not found: ID does not exist" containerID="86beb07b61a85f6fbd4d24a6e019b0ac73a59e7b477e6aeb7bfe10fc57ba0ae7" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.643403 4895 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86beb07b61a85f6fbd4d24a6e019b0ac73a59e7b477e6aeb7bfe10fc57ba0ae7"} err="failed to get container status \"86beb07b61a85f6fbd4d24a6e019b0ac73a59e7b477e6aeb7bfe10fc57ba0ae7\": rpc error: code = NotFound desc = could not find container \"86beb07b61a85f6fbd4d24a6e019b0ac73a59e7b477e6aeb7bfe10fc57ba0ae7\": container with ID starting with 86beb07b61a85f6fbd4d24a6e019b0ac73a59e7b477e6aeb7bfe10fc57ba0ae7 not found: ID does not exist" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.740223 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2b28f7-95a1-42c7-957f-4ff551fa8617-catalog-content\") pod \"community-operators-8flm2\" (UID: \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\") " pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.740308 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2b28f7-95a1-42c7-957f-4ff551fa8617-utilities\") pod \"community-operators-8flm2\" (UID: \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\") " pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.740840 4895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkcbh\" 
(UniqueName: \"kubernetes.io/projected/2c2b28f7-95a1-42c7-957f-4ff551fa8617-kube-api-access-rkcbh\") pod \"community-operators-8flm2\" (UID: \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\") " pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.842723 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkcbh\" (UniqueName: \"kubernetes.io/projected/2c2b28f7-95a1-42c7-957f-4ff551fa8617-kube-api-access-rkcbh\") pod \"community-operators-8flm2\" (UID: \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\") " pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.842807 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2b28f7-95a1-42c7-957f-4ff551fa8617-catalog-content\") pod \"community-operators-8flm2\" (UID: \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\") " pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.842849 4895 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2b28f7-95a1-42c7-957f-4ff551fa8617-utilities\") pod \"community-operators-8flm2\" (UID: \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\") " pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.843381 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2b28f7-95a1-42c7-957f-4ff551fa8617-catalog-content\") pod \"community-operators-8flm2\" (UID: \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\") " pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.843408 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2c2b28f7-95a1-42c7-957f-4ff551fa8617-utilities\") pod \"community-operators-8flm2\" (UID: \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\") " pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.863151 4895 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkcbh\" (UniqueName: \"kubernetes.io/projected/2c2b28f7-95a1-42c7-957f-4ff551fa8617-kube-api-access-rkcbh\") pod \"community-operators-8flm2\" (UID: \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\") " pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:09 crc kubenswrapper[4895]: I0129 17:52:09.968959 4895 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:10 crc kubenswrapper[4895]: I0129 17:52:10.447852 4895 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8flm2"] Jan 29 17:52:10 crc kubenswrapper[4895]: I0129 17:52:10.499293 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8flm2" event={"ID":"2c2b28f7-95a1-42c7-957f-4ff551fa8617","Type":"ContainerStarted","Data":"9426fcb562104bc381f821e4366d010f57c45019c577b38ae9ece808c1f2d134"} Jan 29 17:52:11 crc kubenswrapper[4895]: I0129 17:52:11.057414 4895 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c4f588f-f4f6-4000-9919-8461a6af64a3" path="/var/lib/kubelet/pods/4c4f588f-f4f6-4000-9919-8461a6af64a3/volumes" Jan 29 17:52:11 crc kubenswrapper[4895]: I0129 17:52:11.514359 4895 generic.go:334] "Generic (PLEG): container finished" podID="2c2b28f7-95a1-42c7-957f-4ff551fa8617" containerID="9bc38ccafa260843e97383827f40cfb4e9be32a271abc525bf4abc5f35d53f93" exitCode=0 Jan 29 17:52:11 crc kubenswrapper[4895]: I0129 17:52:11.514404 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8flm2" 
event={"ID":"2c2b28f7-95a1-42c7-957f-4ff551fa8617","Type":"ContainerDied","Data":"9bc38ccafa260843e97383827f40cfb4e9be32a271abc525bf4abc5f35d53f93"} Jan 29 17:52:12 crc kubenswrapper[4895]: I0129 17:52:12.037514 4895 scope.go:117] "RemoveContainer" containerID="7f15e55fbff1e227d8f24e31334ff7e004b8125a9636dc4c49af28547f3679cf" Jan 29 17:52:12 crc kubenswrapper[4895]: E0129 17:52:12.038663 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:52:12 crc kubenswrapper[4895]: I0129 17:52:12.531094 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8flm2" event={"ID":"2c2b28f7-95a1-42c7-957f-4ff551fa8617","Type":"ContainerStarted","Data":"3ad1a961d1bea96f9d0bba26819323633c93534647cf19aa56cf01c520d980ad"} Jan 29 17:52:13 crc kubenswrapper[4895]: I0129 17:52:13.563466 4895 generic.go:334] "Generic (PLEG): container finished" podID="2c2b28f7-95a1-42c7-957f-4ff551fa8617" containerID="3ad1a961d1bea96f9d0bba26819323633c93534647cf19aa56cf01c520d980ad" exitCode=0 Jan 29 17:52:13 crc kubenswrapper[4895]: I0129 17:52:13.563537 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8flm2" event={"ID":"2c2b28f7-95a1-42c7-957f-4ff551fa8617","Type":"ContainerDied","Data":"3ad1a961d1bea96f9d0bba26819323633c93534647cf19aa56cf01c520d980ad"} Jan 29 17:52:14 crc kubenswrapper[4895]: I0129 17:52:14.574912 4895 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8flm2" 
event={"ID":"2c2b28f7-95a1-42c7-957f-4ff551fa8617","Type":"ContainerStarted","Data":"7ff4854e118c102dc0e1fd80cc50bca563e2d5ffccfe2407649700a469941462"} Jan 29 17:52:14 crc kubenswrapper[4895]: I0129 17:52:14.614971 4895 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8flm2" podStartSLOduration=3.058249006 podStartE2EDuration="5.614948449s" podCreationTimestamp="2026-01-29 17:52:09 +0000 UTC" firstStartedPulling="2026-01-29 17:52:11.517963096 +0000 UTC m=+6015.320940410" lastFinishedPulling="2026-01-29 17:52:14.074662579 +0000 UTC m=+6017.877639853" observedRunningTime="2026-01-29 17:52:14.600343573 +0000 UTC m=+6018.403320927" watchObservedRunningTime="2026-01-29 17:52:14.614948449 +0000 UTC m=+6018.417925723" Jan 29 17:52:19 crc kubenswrapper[4895]: I0129 17:52:19.969120 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:19 crc kubenswrapper[4895]: I0129 17:52:19.969686 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:20 crc kubenswrapper[4895]: I0129 17:52:20.026767 4895 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:20 crc kubenswrapper[4895]: I0129 17:52:20.695415 4895 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:20 crc kubenswrapper[4895]: I0129 17:52:20.760314 4895 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8flm2"] Jan 29 17:52:22 crc kubenswrapper[4895]: I0129 17:52:22.653017 4895 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8flm2" podUID="2c2b28f7-95a1-42c7-957f-4ff551fa8617" containerName="registry-server" 
containerID="cri-o://7ff4854e118c102dc0e1fd80cc50bca563e2d5ffccfe2407649700a469941462" gracePeriod=2 Jan 29 17:52:22 crc kubenswrapper[4895]: E0129 17:52:22.790105 4895 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c2b28f7_95a1_42c7_957f_4ff551fa8617.slice/crio-7ff4854e118c102dc0e1fd80cc50bca563e2d5ffccfe2407649700a469941462.scope\": RecentStats: unable to find data in memory cache]" Jan 29 17:52:23 crc kubenswrapper[4895]: I0129 17:52:23.037480 4895 scope.go:117] "RemoveContainer" containerID="7f15e55fbff1e227d8f24e31334ff7e004b8125a9636dc4c49af28547f3679cf" Jan 29 17:52:23 crc kubenswrapper[4895]: E0129 17:52:23.037956 4895 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qh8vw_openshift-machine-config-operator(9af81de5-cf3e-4437-b9c1-32ef1495f362)\"" pod="openshift-machine-config-operator/machine-config-daemon-qh8vw" podUID="9af81de5-cf3e-4437-b9c1-32ef1495f362" Jan 29 17:52:23 crc kubenswrapper[4895]: I0129 17:52:23.163812 4895 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8flm2" Jan 29 17:52:23 crc kubenswrapper[4895]: I0129 17:52:23.170985 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkcbh\" (UniqueName: \"kubernetes.io/projected/2c2b28f7-95a1-42c7-957f-4ff551fa8617-kube-api-access-rkcbh\") pod \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\" (UID: \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\") " Jan 29 17:52:23 crc kubenswrapper[4895]: I0129 17:52:23.171130 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c2b28f7-95a1-42c7-957f-4ff551fa8617-catalog-content\") pod \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\" (UID: \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\") " Jan 29 17:52:23 crc kubenswrapper[4895]: I0129 17:52:23.171164 4895 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c2b28f7-95a1-42c7-957f-4ff551fa8617-utilities\") pod \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\" (UID: \"2c2b28f7-95a1-42c7-957f-4ff551fa8617\") " Jan 29 17:52:23 crc kubenswrapper[4895]: I0129 17:52:23.179006 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c2b28f7-95a1-42c7-957f-4ff551fa8617-utilities" (OuterVolumeSpecName: "utilities") pod "2c2b28f7-95a1-42c7-957f-4ff551fa8617" (UID: "2c2b28f7-95a1-42c7-957f-4ff551fa8617"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:52:23 crc kubenswrapper[4895]: I0129 17:52:23.183122 4895 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c2b28f7-95a1-42c7-957f-4ff551fa8617-kube-api-access-rkcbh" (OuterVolumeSpecName: "kube-api-access-rkcbh") pod "2c2b28f7-95a1-42c7-957f-4ff551fa8617" (UID: "2c2b28f7-95a1-42c7-957f-4ff551fa8617"). InnerVolumeSpecName "kube-api-access-rkcbh". 
PluginName "kubernetes.io/projected", VolumeGidValue ""